Sample records for discrete complex-image method

  1. Digital double random amplitude image encryption method based on the symmetry property of the parametric discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Bekkouche, Toufik; Bouguezel, Saad

    2018-03-01

    We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
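    The complex-to-real conversion rests on the conjugate symmetry of the DFT of a real sequence. A minimal numpy sketch of that symmetry and of a (hypothetical) real-valued packing — not the paper's parametric transform:

```python
import numpy as np

# Hedged sketch: for a real-valued sequence x of length N, the DFT obeys
# X[k] == conj(X[N-k]).  The complex spectrum therefore holds only N
# independent real numbers, so it can be stored as one real-valued array
# instead of separate real and imaginary images.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
X = np.fft.fft(x)

# Verify the conjugate symmetry of the spectrum of a real input.
for k in range(1, len(x)):
    assert np.allclose(X[k], np.conj(X[-k]))

# Pack the spectrum into a real array of the same length: X[0] and
# X[N/2] are purely real; X[1..N/2-1] come in conjugate pairs.
packed = np.concatenate(([X[0].real, X[4].real], X[1:4].real, X[1:4].imag))
assert packed.shape == x.shape  # one real-valued image, not two
```

    This is why only one real-valued image needs to be stored or transmitted, rather than a real and an imaginary part.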

  2. Adaptive Microwave Staring Correlated Imaging for Targets Appearing in Discrete Clusters.

    PubMed

    Tian, Chao; Jiang, Zheng; Chen, Weidong; Wang, Dongjin

    2017-10-21

    Microwave staring correlated imaging (MSCI) can achieve ultra-high resolution in real aperture staring radar imaging using the correlated imaging process (CIP) under all-weather and all-day circumstances. The CIP must combine the received echo signal with the temporal-spatial stochastic radiation field. However, a precondition of the CIP is that the continuous imaging region must be discretized to a fine grid, and the measurement matrix should be accurately computed, which makes the imaging process highly complex when the MSCI system observes a wide area. This paper proposes an adaptive imaging approach for the targets in discrete clusters to reduce the complexity of the CIP. The approach is divided into two main stages. First, as discrete clustered targets are distributed in different range strips in the imaging region, the transmitters of the MSCI emit narrow-pulse waveforms to separate the echoes of the targets in different strips in the time domain; using spectral entropy, a modified method robust against noise is put forward to detect the echoes of the discrete clustered targets, based on which the strips with targets can be adaptively located. Second, in a strip with targets, the matched filter reconstruction algorithm is used to locate the regions with targets, and only the regions of interest are discretized to a fine grid; sparse recovery is used, and the band exclusion is used to maintain the non-correlation of the dictionary. Simulation results are presented to demonstrate that the proposed approach can accurately and adaptively locate the regions with targets and obtain high-quality reconstructed images.

  3. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which should be preferred. To help clarify the situation, we present the results of a study evaluating the efficiency of different wavelet transform feature extraction methods in brain MRI abnormality detection. Using T1-weighted brain images, the Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) are applied to construct the feature pool. Three classifiers, namely Support Vector Machine (SVM), K-Nearest Neighbor, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM yield the highest classification accuracy, demonstrating that wavelet transform features are informative in this application.
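    As a minimal illustration of wavelet feature extraction (a plain one-level Haar DWT with subband energies as the feature vector — not the paper's full DWT/DWPT/DTCWT/CMWT pool):

```python
import numpy as np

# Hedged sketch: one-level 2-D Haar DWT of an image, with the energy of
# each subband (LL, LH, HL, HH) used as a simple wavelet feature vector.
def haar_dwt2(img):
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal details
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical details
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal details
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)  # toy stand-in for an MRI slice
features = np.array([np.sum(b ** 2) for b in haar_dwt2(img)])
assert features.shape == (4,)
```

    In practice such subband statistics would be pooled over many slices and fed to a classifier such as an SVM.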

  4. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by the NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently identify the frequency coefficients of the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with image fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT), as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated with a clinical example of a patient with a recurrent tumor. PMID:25214889

  5. Discrete Fourier Transform in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2015-01-01

    An image-based phase retrieval technique has been developed that can be used on board a space based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, particularly when parts of the calculation do not have to be repeated at each iteration to converge to an acceptable solution in order to focus an image.

  6. Numerical computation of diffusion on a surface.

    PubMed

    Schwartz, Peter; Adalsteinsson, David; Colella, Phillip; Arkin, Adam Paul; Onsum, Matthew

    2005-08-09

    We present a numerical method for computing diffusive transport on a surface derived from image data. Our underlying discretization method uses a Cartesian grid embedded boundary method for computing the volume transport in a region consisting of all points a small distance from the surface. We obtain a representation of this region from image data by using a front propagation computation based on level set methods for solving the Hamilton-Jacobi and eikonal equations. We demonstrate that the method is second-order accurate in space and time and is capable of computing solutions on complex surface geometries obtained from image data of cells.

  7. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
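    A hedged sketch of the rate-distortion trade-off behind quantization table design. This uses a simple per-frequency Lagrangian cost with an entropy rate proxy and assumed Laplacian coefficient statistics — a simplification of the patent's dynamic-programming optimization over the full table:

```python
import numpy as np

# For one DCT frequency band, try each candidate quantizer step q,
# estimate distortion (MSE) and a rate proxy (entropy of the quantized
# levels), and keep the step minimizing the Lagrangian cost D + lam * R.
rng = np.random.default_rng(5)
coeffs = rng.laplace(0.0, 4.0, size=2048)   # assumed AC-coefficient stats
steps = np.arange(1, 33)                    # candidate quantizer steps
lam = 2.0                                   # rate-distortion trade-off

best = None
for q in steps:
    quant = np.round(coeffs / q)
    dist = np.mean((coeffs - q * quant) ** 2)         # mean squared error
    levels, counts = np.unique(quant, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))                    # entropy, bits/coeff
    cost = dist + lam * rate
    if best is None or cost < best[0]:
        best = (cost, q)
assert 1 <= best[1] <= 32
```

    Repeating this per frequency (with rate coupling handled by dynamic programming, as in the patent) yields a rate-distortion-optimal quantization table.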

  8. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on-board satellites.
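    A minimal sketch of the hybrid idea under stated assumptions (one-level Haar DWT, an orthonormal DCT-II applied to the LL band, and simple coefficient thresholding rather than the paper's zero-padding scheme):

```python
import numpy as np

# The DWT concentrates image energy into the LL band; a 2-D DCT then
# compacts that band further so small coefficients can be discarded.
def dct_matrix(n):
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C  # orthonormal DCT-II matrix

img = np.random.default_rng(1).random((8, 8))   # toy stand-in image
# One-level Haar approximation (LL band): 2x2 block averages.
LL = (img[0::2, 0::2] + img[1::2, 0::2]
      + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

C = dct_matrix(4)
coeffs = C @ LL @ C.T                        # 2-D DCT of the LL band
assert np.allclose(C.T @ coeffs @ C, LL)     # the transform is invertible
coeffs[np.abs(coeffs) < 0.05] = 0.0          # "compression": drop small terms
LL_rec = C.T @ coeffs @ C                    # lossy reconstruction
```

    The real system adds quantization and entropy coding on top of this cascade.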

  9. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images that combines the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, the DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, the DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, the LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.

  10. Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.

    PubMed

    Pinton, Gianmarco F

    2017-03-01

    Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking.
This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.

  11. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  12. Aorta modeling with the element-based zero-stress state and isogeometric discretization

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi

    2017-02-01

    Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with the linear basis functions, but using larger-size and much fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. 
To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry comes from a medical image of a human aorta.

  13. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is implemented to determine the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was evaluated on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The proposed method works more efficiently than the existing methods. It also works with typed and picture documents having different fonts and resolutions, and it overcomes the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
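    A toy illustration of the frequency-domain principle: a periodic pattern's dominant FFT peak encodes its orientation. Here a synthetic plane wave stands in for the (DWT-reduced) document image; this is not the paper's exact pipeline:

```python
import numpy as np

# Text lines act like a roughly periodic pattern, so the strongest
# non-DC peak of the 2-D FFT magnitude points along the skew direction.
N = 64
fy, fx = 3, 5                                    # known pattern frequencies
y, x = np.mgrid[0:N, 0:N]
img = np.cos(2 * np.pi * (fy * y + fx * x) / N)  # synthetic "document"

spec = np.abs(np.fft.fft2(img))
spec[0, 0] = 0.0                                 # ignore the DC term
ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
ky = ky if ky <= N // 2 else ky - N              # unwrap to signed frequency
kx = kx if kx <= N // 2 else kx - N
angle = np.degrees(np.arctan2(ky, kx)) % 180.0   # orientation, mod 180°
assert abs(angle - np.degrees(np.arctan2(fy, fx))) < 1.0
```

    A real document needs peak detection that is robust to broadband content, which is where the DWT size reduction helps.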

  14. Infrared images target detection based on background modeling in the discrete cosine domain

    NASA Astrophysics Data System (ADS)

    Ye, Han; Pei, Jihong

    2018-02-01

    Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the stability of the background and its separability from the target are analyzed in depth in the discrete cosine transform (DCT) domain, and on this basis we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods in the spatial domain.
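    The per-frequency single Gaussian model can be sketched as follows (synthetic coefficients stand in for DCT-domain frames, and the injected spike is a hypothetical target):

```python
import numpy as np

# Each frequency coefficient is modeled as a single Gaussian over past
# frames; a coefficient far from the background mean is flagged as
# target energy, and the rest is suppressed as background.
rng = np.random.default_rng(2)
frames = rng.normal(0.0, 1.0, size=(50, 16))   # 50 frames, 16 coefficients
mu, sigma = frames.mean(axis=0), frames.std(axis=0)

new_frame = mu.copy()                          # background-like frame...
new_frame[3] += 10.0 * sigma[3] + 1.0          # ...with an injected "target"
is_target = np.abs(new_frame - mu) > 3.0 * sigma
assert is_target[3] and is_target.sum() == 1
```

    In the paper the same test is applied per DCT frequency point, which is what makes the fluctuating sea clutter separable from the target.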

  15. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which were introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
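    For reference, the classical real Givens-rotation QR that the heap-transform method generalizes can be sketched in a few lines (this is the textbook algorithm, not the paper's analytical heap-transform equations):

```python
import numpy as np

# QR-decomposition by plane (Givens) rotations: each rotation zeroes one
# subdiagonal entry; the product of rotations forms Q, leaving R upper
# triangular with A = Q R.
def givens_qr(A):
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero column j bottom-up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]       # annihilate R[i, j]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T     # accumulate Q
    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
Q, R = givens_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.allclose(np.tril(R, -1), 0.0)
```

    The heap-transform approach replaces these pairwise rotations with signal-induced unitary transforms that can zero several entries at once.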

  16. Fast parallel approach for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2009-12-01

    Two-dimensional fast Gabor transform algorithms are useful for real-time applications due to the high computational complexity of the traditional 2-D complex-valued discrete Gabor transform (CDGT). This paper presents two block time-recursive algorithms for the 2-D DHT-based real-valued discrete Gabor transform (RDGT) and its inverse transform and develops a fast parallel approach for the implementation of the two algorithms. The computational complexity of the proposed parallel approach is analyzed and compared with that of the existing 2-D CDGT algorithms. The results indicate that the proposed parallel approach is attractive for real-time image processing.

  17. A method to perform a fast fourier transform with primitive image transformations.

    PubMed

    Sheridan, Phil

    2007-05-01

    The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). This methodology emerges from considerations of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described include: 1) a bit-wise reversal re-grouping operation of the conventional FFT is replaced by the use of lossless image rotation and scaling and 2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The significance of the FFT presented in this paper is introduced by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.

  18. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  19. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
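    The fixed-point attractors ADAM computes algebraically are exactly the solutions of f(x) = x. For a small toy Boolean network (update rules assumed for illustration, not taken from ADAM), exhaustive search makes the idea concrete:

```python
from itertools import product

# A Boolean network's steady states are the states mapped to themselves
# by the update function f.  ADAM recasts f(x) = x as a polynomial
# system; for 3 nodes, brute force over all 2**3 states suffices.
def f(state):
    a, b, c = state
    return (b and c, a, not a)   # toy update rules (assumed, not ADAM's)

fixed_points = [x for x in product([False, True], repeat=3)
                if tuple(f(x)) == x]
assert fixed_points == [(False, False, True)]
```

    Brute force grows as 2**n in the number of nodes, which is precisely the exponential blow-up that ADAM's computer-algebra approach avoids for sparse biological networks.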

  20. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    PubMed

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement of any image retrieval process is to rank the images by their visual similarity to the query. Color, shape, and texture are examples of low-level image features. Features play a significant role in image processing: the representation of an image as a feature vector is obtained by feature extraction techniques, and these features are used for classifying and recognizing images. As features define the behavior of an image, they determine storage requirements, classification efficiency, and time consumption. In this paper, we discuss various types of features and feature extraction techniques and explain in which scenarios each technique performs better. The effectiveness of a CBIR approach fundamentally depends on feature extraction; in image processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed image retrieval method is built on the YCbCr color space with a Canny edge histogram and the discrete wavelet transform; the combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of a specific wavelet for image retrieval. The proposed algorithm is implemented and tested on the Wang image database. For image retrieval, Artificial Neural Networks (ANN) are used and applied on a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods to demonstrate the superiority of our method. The proposed approach outperforms existing research in terms of average precision and recall values.
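    The core retrieval loop, stripped to its essentials (a plain intensity histogram stands in for the paper's YCbCr/edge/wavelet descriptor, and Euclidean distance for its metric):

```python
import numpy as np

# CBIR in miniature: extract a feature vector per image, then rank the
# database by distance between the query's features and each entry's.
def features(img, bins=8):
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()                     # normalized intensity histogram

rng = np.random.default_rng(3)
db = [rng.random((16, 16)) for _ in range(5)]   # toy image "database"
query = db[2].copy()                            # query identical to db[2]

dists = [np.linalg.norm(features(query) - features(img)) for img in db]
assert int(np.argmin(dists)) == 2               # nearest match retrieved
```

    The paper replaces the histogram with a richer YCbCr/edge/wavelet descriptor and uses an ANN to improve the ranking, but the query-by-distance structure is the same.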

  1. Denoising embolic Doppler ultrasound signals using Dual Tree Complex Discrete Wavelet Transform.

    PubMed

    Serbes, Gorkem; Aydin, Nizamettin

    2010-01-01

    Early and accurate detection of asymptomatic emboli is important for monitoring preventive therapy in stroke-prone patients. One of the problems in the detection of emboli is identifying an embolic signal caused by very small emboli. The amplitude of the embolic signal may be so small that advanced processing methods are required to distinguish these signals from Doppler signals arising from red blood cells. In this study, instead of the conventional discrete wavelet transform, the Dual Tree Complex Discrete Wavelet Transform was used for denoising embolic signals, and the performance of both approaches was compared. Unlike the conventional discrete wavelet transform, the dual tree complex discrete wavelet transform is a shift-invariant transform with limited redundancy. Results demonstrate that Dual Tree Complex Discrete Wavelet Transform based denoising outperforms conventional discrete wavelet denoising: approximately 8 dB improvement is obtained, compared to less than 5 dB with the conventional Discrete Wavelet Transform.
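    The denoising principle can be sketched with an ordinary Haar DWT and soft thresholding (the paper's advantage comes from the shift-invariant dual-tree complex transform; this plain-DWT sketch only illustrates the coefficient shrinkage step):

```python
import numpy as np

# Wavelet denoising: small detail coefficients carry mostly noise, so
# shrinking them toward zero and inverting the transform reduces noise.
def haar(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar(a, d):
    x = np.empty(2 * a.size)
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))       # stand-in signal
noisy = clean + 0.3 * rng.standard_normal(256)

a, d = haar(noisy)
t = 0.3 * np.sqrt(2 * np.log(d.size))                # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)      # soft thresholding
denoised = ihaar(a, d)
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

    The dual-tree complex transform applies the same shrinkage but without the shift sensitivity of the critically sampled DWT, which is what yields the extra ~3 dB reported.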

  2. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To compress multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
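    A toy sketch of the two stages, compression by spectrum cutting and encryption with a chaotic key (an FFT and a logistic-map permutation stand in for the paper's DCT and discrete fractional random transform):

```python
import numpy as np

# Stage 1: compress by keeping only the leading spectrum coefficients.
# Stage 2: "encrypt" by permuting them with a key-dependent chaotic
# sequence; the same key inverts the permutation on decryption.
def logistic_perm(n, x0=0.3, mu=3.99):
    x = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1 - x0)   # logistic map iteration
        x[i] = x0
    return np.argsort(x)          # key-dependent permutation

signal = np.cos(np.linspace(0, np.pi, 32))
spec = np.fft.fft(signal)         # FFT stands in for the DCT here
kept = spec[:8]                   # spectrum "cutting" (compression)

perm = logistic_perm(8)           # chaotic key (x0, mu)
cipher = kept[perm]               # scramble the compressed spectrum
recovered = np.empty_like(cipher)
recovered[perm] = cipher          # inverse permutation with the same key
assert np.allclose(recovered, kept)
```

    The real scheme replaces the permutation with a DFrRT whose fractional order is itself part of the key space.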

  3. Synthesis, Characterization, and Handling of Eu(II)-Containing Complexes for Molecular Imaging Applications

    NASA Astrophysics Data System (ADS)

    Basal, Lina A.; Allen, Matthew J.

    2018-03-01

    Considerable research effort has focused on the in vivo use of responsive imaging probes that change imaging properties upon reacting with oxygen because hypoxia is relevant to diagnosing, treating, and monitoring diseases. One promising class of compounds for oxygen-responsive imaging is Eu(II)-containing complexes because the Eu(II/III) redox couple enables imaging with multiple modalities including magnetic resonance and photoacoustic imaging. The use of Eu(II) requires care in handling to avoid unintended oxidation during synthesis and characterization. This review describes recent advances in the field of imaging agents based on discrete Eu(II)-containing complexes with specific focus on the synthesis, characterization, and handling of aqueous Eu(II)-containing complexes.

  4. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    PubMed

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper proposes a watermarking system based on the discrete cosine transform (DCT) for hiding biometric color images, intended to protect the security and integrity of transmitted biometric color images. Watermarking is an important information-hiding technique (for audio, video, color images, and gray images) that has become widely used for digital objects with the technology developed in recent years. One of the common methods for hiding information in image files is the DCT, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.

  5. Method and apparatus for providing a seamless tiled display

    NASA Technical Reports Server (NTRS)

    Dubin, Matthew B. (Inventor); Johnson, Michael J. (Inventor)

    2002-01-01

    A display for producing a seamless composite image from at least two discrete images. The display includes one or more projectors for projecting each of the discrete images separately onto a screen such that at least one of the discrete images overlaps at least one other of the discrete images by more than 25 percent. The amount of overlap that is required to reduce the seams of the composite image to an acceptable level over a predetermined viewing angle depends on a number of factors including the field-of-view and aperture size of the projectors, the screen gain profile, etc. For rear-projection screens and some front projection screens, an overlap of more than 25 percent is acceptable.
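    The seam-hiding role of the overlap can be sketched as a linear cross-fade between two horizontally adjacent tiles (an illustrative numpy blend with hypothetical names; the patent's actual blending depends on projector apertures and the screen gain profile):

```python
import numpy as np

def blend_tiles(left, right, overlap):
    """Composite two same-size tiles that share `overlap` columns,
    cross-fading linearly so the seam carries no intensity step."""
    h, w = left.shape
    ramp = np.linspace(1.0, 0.0, overlap)          # weight for the left tile
    out = np.zeros((h, 2 * w - overlap))
    out[:, :w] += left
    out[:, w - overlap:] += right
    # overwrite the doubly-covered columns with the weighted blend
    out[:, w - overlap:w] = (left[:, -overlap:] * ramp +
                             right[:, :overlap] * (1.0 - ramp))
    return out
```

    With matched tile content in the overlap, the blended region reproduces the original intensity exactly, which is why a generous overlap makes the composite appear seamless.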

  6. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme that labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.

  7. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
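    For very small networks, the attractor-identification task that ADAM solves algebraically can be illustrated by brute-force enumeration of the synchronous state space (an illustrative sketch only; ADAM avoids exactly this exponential enumeration by solving polynomial systems with computer algebra):

```python
from itertools import product

def attractors(update, n):
    """Exhaustively find the attractors (terminal cycles) of a
    synchronous Boolean network on n variables.

    `update` maps a 0/1 state tuple to its successor. Feasible only
    for small n, since the state space has 2**n states."""
    found = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)
        # the trajectory re-entered itself: everything from that index on is the cycle
        cycle_start = seen[state]
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        found.add(cycle)
    return found
```

    For the toy two-node network x' = y, y' = x this finds the two fixed points (0,0) and (1,1) plus the two-state cycle {(0,1), (1,0)}.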

  8. The effect of SUV discretization in quantitative FDG-PET Radiomics: the need for standardized methodology in tumor texture analysis

    NASA Astrophysics Data System (ADS)

    Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe

    2015-08-01

    FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B at both imaging time points. Feature values depended on the intensity resolution, and of the two assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and their interpretation, emphasizing the importance of standardized methodology in tumor texture analysis.
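    The two discretization schemes can be sketched directly (hypothetical function names; conventions for handling the range edges vary between implementations):

```python
import numpy as np

def discretize_rd(suv, d):
    """R_D: divide the SUV range of *this* image into d equal bins,
    so the bin size (intensity resolution) varies per image."""
    edges = np.linspace(suv.min(), suv.max(), d + 1)
    return np.clip(np.digitize(suv, edges[1:-1]), 0, d - 1)

def discretize_rb(suv, b):
    """R_B: fixed bin size b (e.g. 0.5 SUV), so the intensity
    resolution is identical across patients and time points."""
    return np.floor(suv / b).astype(int)
```

    Under R_D, the same SUV can land in different bins for different patients; under R_B, bin indices retain an absolute SUV meaning, which is what enables the inter- and intra-patient comparisons reported above.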

  9. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
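    The real-valued Hartley kernel underlying the RDGT can be obtained from a standard FFT (a one-line identity shown for illustration, not the authors' multirate convolver structure):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT.

    The Hartley kernel is cas(t) = cos(t) + sin(t), so
    H(x) = Re(F(x)) - Im(F(x)) for the standard DFT F."""
    f = np.fft.fft(x)
    return f.real - f.imag
```

    The DHT is its own inverse up to a factor of N, which is one reason real-valued Hartley-based Gabor transforms avoid complex arithmetic entirely.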

  10. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The former involves a high computational burden and the latter suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method using the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. The orthogonal properties of their products then dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform, which reduces the computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
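    The final step, solving a discrete convolution equation via the FFT, can be sketched in one dimension (generic zero-padded FFT convolution, not the sinc-basis machinery itself):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via zero-padded FFTs: O(N log N) instead of
    the O(N^2) direct sum, the same device that makes a discrete
    convolution equation fast to evaluate and solve."""
    n = len(a) + len(b) - 1                       # full linear-convolution length
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
```

    Zero-padding to the full output length is what turns the FFT's circular convolution into the required linear one.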

  11. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variable, while finite element methods emphasize the discretization of the dependent variable (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
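    A local functional approximation of the finite-difference kind can be illustrated by the standard central-difference stencil for a second derivative (a textbook sketch, not taken from the paper):

```python
import numpy as np

def second_derivative_fd(f, x, h=1e-4):
    """Central-difference approximation of f''(x), accurate to O(h^2):
    a purely local functional approximation built on a discretized
    independent variable."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2
```

    A finite element method would instead expand f in global piecewise basis functions and differentiate the expansion, the contrast the paper draws.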

  12. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
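    The complex-variable formulation rests on the complex-step derivative identity, which can be sketched as follows (a generic illustration of the technique, not the authors' adjoint assembly):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-variable differentiation: f'(x) ~ Im(f(x + i*h)) / h.

    Unlike finite differences there is no subtractive cancellation,
    so h can be taken absurdly small and the result is accurate to
    machine precision."""
    return np.imag(f(x + 1j * h)) / h
```

    This is why a real-valued flow solver, once recompiled with complex arithmetic, yields exact linearizations of complicated functions with essentially no hand differentiation.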

  14. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification approaches for compound images, using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound image into non-overlapping fixed-size blocks. A frequency transform such as the discrete cosine transform (DCT) or discrete wavelet transform (DWT) is then applied over each block. Mean and standard deviation are computed for each 8 × 8 block and used as the feature set to classify blocks as text/graphics or picture/background. The classification accuracy of the block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time, for both smooth and complex background images.
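    Computing the per-block mean and standard deviation features can be sketched with numpy (an illustrative sketch with a hypothetical function name; the DCT/DWT step applied before feature extraction is omitted here):

```python
import numpy as np

def block_features(img, bs=8):
    """Mean and standard deviation of each non-overlapping bs x bs
    block, the feature pair used to classify blocks as text/graphics
    versus picture/background."""
    h, w = img.shape
    blocks = (img[:h - h % bs, :w - w % bs]           # crop to whole blocks
              .reshape(h // bs, bs, w // bs, bs)
              .swapaxes(1, 2))                        # (rows, cols, bs, bs)
    return blocks.mean(axis=(2, 3)), blocks.std(axis=(2, 3))
```

    Text blocks typically show high contrast (large std) over a bimodal intensity range, while smooth background blocks cluster near zero std, which is what makes this simple feature pair discriminative.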

  15. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.

    PubMed

    Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian

    2015-03-01

    We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.

  17. Image-based modeling of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Qin, Chao-Zhong; Hoang, Tuong; Verhoosel, Clemens V.; Harald van Brummelen, E.; Wijshoff, Herman M. A.

    2017-04-01

    Due to the availability of powerful computational resources and high-resolution acquisition of material structures, image-based modeling has become an important tool in studying pore-scale flow and transport processes in porous media [Scheibe et al., 2015]. It is also playing an important role in the upscaling study for developing macroscale porous media models. Usually, the pore structure of a porous medium is directly discretized by the voxels obtained from visualization techniques (e.g. micro CT scanning), which can avoid the complex generation of computational mesh. However, this discretization may considerably overestimate the interfacial areas between solid walls and pore spaces. As a result, it could impact the numerical predictions of reactive transport and immiscible two-phase flow. In this work, two types of image-based models are used to study single-phase flow and reactive transport in a porous medium of sintered glass beads. One model is from a well-established voxel-based simulation tool. The other is based on the mixed isogeometric finite cell method [Hoang et al., 2016], which has been implemented in the open source Nutils (http://www.nutils.org). The finite cell method can be used in combination with isogeometric analysis to enable the higher-order discretization of problems on complex volumetric domains. A particularly interesting application of this immersed simulation technique is image-based analysis, where the geometry is smoothly approximated by segmentation of a B-spline level set approximation of scan data [Verhoosel et al., 2015]. Through a number of case studies by the two models, we will show the advantages and disadvantages of each model in modeling single-phase flow and reactive transport in porous media. Particularly, we will highlight the importance of preserving high-resolution interfaces between solid walls and pore spaces in image-based modeling of porous media. References Hoang, T., C. V. Verhoosel, F. Auricchio, E. H. 
van Brummelen, and A. Reali (2016), Mixed Isogeometric Finite Cell Methods for the Stokes problem, Computer Methods in Applied Mechanics and Engineering, doi:10.1016/j.cma.2016.07.027. Scheibe, T. D., W. A. Perkins, M. C. Richmond, M. I. McKinley, P. D. J. Romero-Gomez, M. Oostrom, T. W. Wietsma, J. A. Serkowski, and J. M. Zachara (2015), Pore-scale and multiscale numerical simulation of flow and transport in a laboratory-scale column, Water Resources Research, 51(2), 1023-1035, doi:10.1002/2014WR015959. Verhoosel, C. V., G. J. van Zwieten, B. van Rietbergen, and R. de Borst (2015), Image-based goal-oriented adaptive isogeometric analysis with application to the micro-mechanical modeling of trabecular bone, Computer Methods in Applied Mechanics and Engineering, 284(February), 138-164, doi:10.1016/j.cma.2014.07.009.

  18. Steganographic embedding in containers-images

    NASA Astrophysics Data System (ADS)

    Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.

    2018-05-01

    Steganography is one of the approaches to ensuring the protection of information transmitted over the network, but a steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of embedding data into the frequency domain of images in JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete wavelet transform is used instead of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen as the quality assessment of the data embedding. Experiments confirm that satisfying the requirements allows a high quality assessment of the data embedding to be achieved.

  19. Level Density in the Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Myo, T.; Katō, K.

    2005-06-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM.
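    In standard single-channel notation (a textbook-style sketch consistent with the description above, not equations copied from the paper), the continuum level density and its complex-scaled discretization read:

```latex
% Continuum level density as the trace difference of full and free resolvents
\Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\!\left[\frac{1}{E - H} \;-\; \frac{1}{E - H_0}\right]

% CSM approximation: bound states E_b, resonances E_r, and discretized
% complex-scaled continuum energies \varepsilon_k (\varepsilon_k^0 for H_0)
\Delta(E) \;\approx\; -\frac{1}{\pi}\,\mathrm{Im}\!\left[\sum_b \frac{1}{E - E_b}
  + \sum_r \frac{1}{E - E_r}
  + \sum_k \frac{1}{E - \varepsilon_k}
  - \sum_k \frac{1}{E - \varepsilon_k^0}\right]

% Single-channel phase shift recovered by integrating the level density
\delta(E) \;=\; \pi \int_{-\infty}^{E} \Delta(E')\,\mathrm{d}E'
```

    The last relation is the basis-function route by which the discretized CLD yields the scattering phase shift.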

  20. Shape complexes: the intersection of label orderings and star convexity constraints in continuous max-flow medical image segmentation

    PubMed Central

    Baxter, John S. H.; Inoue, Jiro; Drangova, Maria; Peters, Terry M.

    2016-01-01

    Abstract. Optimization-based segmentation approaches deriving from discrete graph-cuts and continuous max-flow have become increasingly nuanced, allowing for topological and geometric constraints on the resulting segmentation while retaining global optimality. However, these two considerations, topological and geometric, have yet to be combined in a unified manner. The concept of “shape complexes,” which combine geodesic star convexity with extendable continuous max-flow solvers, is presented. These shape complexes allow more complicated shapes to be created through the use of multiple labels and super-labels, with geodesic star convexity governed by a topological ordering. These problems can be optimized using extendable continuous max-flow solvers. Previous approaches required computationally expensive coordinate system warping, which are ill-defined and ambiguous in the general case. These shape complexes are demonstrated in a set of synthetic images as well as vessel segmentation in ultrasound, valve segmentation in ultrasound, and atrial wall segmentation from contrast-enhanced CT. Shape complexes represent an extendable tool alongside other continuous max-flow methods that may be suitable for a wide range of medical image segmentation problems. PMID:28018937

  1. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to reveal that an iterative image reconstruction algorithm of this kind is constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system with the Euler method, and also with lower-order Runge-Kutta methods, can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
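    A plain (non-block) MART sweep can be sketched as follows (an illustrative numpy sketch with hypothetical names, `gamma` playing the role of the discretization time step; the paper's BI-MART updates over blocks of rays):

```python
import numpy as np

def mart(A, b, n_iter=200, gamma=1.0):
    """Multiplicative ART: each ray i rescales the (positive) image by
    the measurement ratio b_i / (A x)_i, with the exponent weighted by
    that ray's row of the system matrix. For consistent non-negative
    systems the iterates converge to a KL-divergence-minimizing
    solution."""
    x = np.ones(A.shape[1])                       # positive initial image
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ratio = b[i] / max(A[i] @ x, 1e-12)   # guard against zero projection
            x *= ratio ** (gamma * A[i])
    return x
```

    The multiplicative update keeps the image non-negative by construction, mirroring the non-negativity constraint of the continuous-time system.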

  2. Wavelet denoising during optical coherence tomography of the prostate nerves using the complex wavelet transform.

    PubMed

    Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M

    2008-01-01

    Preservation of the cavernous nerves during prostate cancer surgery is critical in preserving sexual function after surgery. Optical coherence tomography (OCT) of the prostate nerves has recently been studied for potential use in nerve-sparing prostate surgery. In this study, the discrete wavelet transform and complex dual-tree wavelet transform are implemented for wavelet shrinkage denoising in OCT images of the rat prostate. Applying the complex dual-tree wavelet transform provides improved results for speckle noise reduction in the OCT prostate image. Image quality metrics of the cavernous nerves and signal-to-noise ratio (SNR) were improved significantly using this complex wavelet denoising technique.

  3. Dependent scattering and absorption by densely packed discrete spherical particles: Effects of complex refractive index

    NASA Astrophysics Data System (ADS)

    Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.

    2017-07-01

    Due to dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, with emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of scattering systems with different complex refractive indexes are obtained by both an electromagnetic method and a radiative transfer method. The Maxwell equations are solved directly by the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to account for dependent scattering effects. The results show that for densely packed discrete random media composed of medium-size-parameter particles (6.964 in this study), the demarcation line between independent and dependent scattering depends markedly on the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with higher refractive-index contrast between the particles and the host medium and with higher particle absorption indexes are more likely to show stronger dependent characteristics. Because the extended Rayleigh-Debye scattering condition fails, the HSPYA has little effect on the dependent scattering correction at large phase shift parameters.

  4. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.

  5. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
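    The forward-adaptive AR modeling at the heart of MFCELP can be sketched as a least-squares fit of prediction coefficients (a generic 1-D AR(p) fit with hypothetical names; the actual method refits coefficients per 3-D macroblock and codes the residual by analysis-by-synthesis):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) predictor
    x[n] ~ sum_{k=1..p} a[k-1] * x[n-k]; returns the coefficients."""
    # column k-1 holds the lag-k samples x[n-k] for n = p .. len(x)-1
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

    Refitting the coefficients per block (forward adaptation) lets the predictor track local statistics, at the cost of transmitting the coefficients as side information.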

  6. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive character of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with the results from an existing method. It is verified that the present method generates smoother elements and a better distribution of element aspect ratios. Two numerical simulations are presented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  7. A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Hui, E-mail: corinna@seu.edu.cn; Key Laboratory of Computer Network and Information Integration; Centre de Recherche en Information Biomédicale sino-français, Laboratoire International Associé, Inserm, Université de Rennes 1, Rennes 35000

    2015-04-15

    Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts will appear in the final image. In most gridline removal image processing methods, the useful information with spatial frequencies close to that of the gridline is usually lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The method is as follows. The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition process stops when the gridline signal is found to be greater than a threshold in one or several of these subimages using a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is obtained using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method has better information preservation after the removal of the gridline artifacts. Additionally, the processing speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
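
    The Gaussian band-stop step can be illustrated with a numpy-only sketch; for simplicity it filters the full image in the FFT domain rather than the detected wavelet subimages, and the gridline frequency is assumed known:

```python
import numpy as np

def gaussian_bandstop(image, f0, sigma=0.02):
    """Suppress a horizontal gridline pattern at vertical frequency f0
    (cycles/pixel) with a Gaussian band-stop filter in the FFT domain."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]   # vertical frequency axis
    # Notch both the +f0 and -f0 bands; H is ~1 far from the notch.
    H = 1.0 - np.exp(-((np.abs(fy) - f0) ** 2) / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(F * H))

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))                      # stand-in for image content
grid = 0.8 * np.sin(2 * np.pi * 0.25 * np.arange(64))[:, None]  # gridline at f0 = 0.25
cleaned = gaussian_bandstop(base + grid, f0=0.25)
```

Applying the notch only to the wavelet subimages that actually contain the gridline, as the paper does, is what spares image detail at other frequencies.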

  8. Cell shape characterization and classification with discrete Fourier transforms and self-organizing maps.

    PubMed

    Kriegel, Fabian L; Köhler, Ralf; Bayat-Sarmadi, Jannike; Bayerl, Simon; Hauser, Anja E; Niesner, Raluca; Luch, Andreas; Cseresnyes, Zoltan

    2018-03-01

    Cells in their natural environment often exhibit complex kinetic behavior and radical adjustments of their shapes. This enables them to accommodate short- and long-term changes in their surroundings under physiological and pathological conditions. Intravital multi-photon microscopy is a powerful tool to record this complex behavior. Traditionally, cell behavior is characterized by tracking the cells' movements, which yields numerous parameters describing the spatiotemporal characteristics of cells. Cells can be classified according to their tracking behavior using all or a subset of these kinetic parameters. This categorization can be supported by the a priori knowledge of experts. While such an approach provides an excellent starting point for analyzing complex intravital imaging data, faster methods are required for automated and unbiased characterization. In addition to their kinetic behavior, the 3D shape of these cells also provides essential clues about the cells' status and functionality. Approaches that also include the study of cell shapes may allow the discovery of correlations amongst the track- and shape-describing parameters. In the current study, we examine the applicability of a set of Fourier components produced by the Discrete Fourier Transform (DFT) as a tool for more efficient and less biased classification of complex cell shapes. By carrying out a number of 3D-to-2D projections of surface-rendered cells, the applied method reduces the more complex 3D shape characterization to a series of 2D DFTs. The resulting shape factors are used to train a Self-Organizing Map (SOM), which provides an unbiased estimate of the best clustering of the data, thereby characterizing groups of cells according to their shape. We propose and demonstrate that such shape characterization is a powerful addition to, or a replacement for, kinetic analysis.
This would make it especially useful in situations where live kinetic imaging is less practical or not possible at all. © 2017 International Society for Advancement of Cytometry.
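
    A minimal version of DFT-based shape factors might look like the following numpy sketch, which computes translation-, scale-, and rotation-invariant Fourier descriptors of a closed 2D contour (the 3D-to-2D projections and SOM clustering of the paper are omitted):

```python
import numpy as np

def shape_descriptors(contour, n_coeffs=8):
    """Fourier descriptors of a closed 2D contour given as complex
    points x + 1j*y: invariant to translation, scale, and rotation."""
    z = np.fft.fft(contour)
    z[0] = 0.0                       # drop the centroid term (translation)
    mags = np.abs(z)                 # magnitudes discard rotation phase
    mags = mags / mags[1]            # normalise by the first harmonic (scale)
    # Keep harmonics k = +-1 .. +-n_coeffs.
    return np.concatenate([mags[1:1 + n_coeffs], mags[-n_coeffs:]])

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.cos(theta) + 1j * np.sin(theta)
ellipse = 2 * np.cos(theta) + 1j * np.sin(theta)
d_circle = shape_descriptors(circle)
d_ellipse = shape_descriptors(ellipse)
print(np.abs(d_circle - d_ellipse).max() > 0.1)  # True: the shapes are distinguished
```

Rotating a contour multiplies every Fourier coefficient by the same unit phase, so the magnitude-based descriptors are unchanged, which is what makes them usable as clustering features.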

  9. Dynamic Imaging by Fluorescence Correlation Spectroscopy Identifies Diverse Populations of Polyglutamine Oligomers Formed in Vivo*

    PubMed Central

    Beam, Monica; Silva, M. Catarina; Morimoto, Richard I.

    2012-01-01

    Protein misfolding and aggregation are exacerbated by aging and diseases of protein conformation including neurodegeneration, metabolic diseases, and cancer. In the cellular environment, aggregates can exist as discrete entities, or heterogeneous complexes of diverse solubility and conformational state. In this study, we have examined the in vivo dynamics of aggregation using imaging methods including fluorescence microscopy, fluorescence recovery after photobleaching (FRAP), and fluorescence correlation spectroscopy (FCS), to monitor the diverse biophysical states of expanded polyglutamine (polyQ) proteins expressed in Caenorhabditis elegans. We show that monomers, oligomers and aggregates co-exist at different concentrations in young and aged animals expressing different polyQ-lengths. During aging, when aggregation and toxicity are exacerbated, FCS-based burst analysis and purified single molecule FCS detected a populational shift toward an increase in the frequency of brighter and larger oligomeric species. Regardless of age or polyQ-length, oligomers were maintained in a heterogeneous distribution that spans multiple orders of magnitude in brightness. We employed genetic suppressors that prevent polyQ aggregation and observed a reduction in visible immobile species with the persistence of heterogeneous oligomers, yet our analysis did not detect the appearance of any discrete oligomeric states associated with toxicity. These studies reveal that the reversible transition from monomers to immobile aggregates is not represented by discrete oligomeric states, but rather suggests that the process of aggregation involves a more complex pattern of molecular interactions of diverse intermediate species that can appear in vivo and contribute to aggregate formation and toxicity. PMID:22669943

  10. Tracking vortices in superconductors: Extracting singularities from a discretized complex scalar field evolving in time

    DOE PAGES

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom; ...

    2016-02-19

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method to the time dimension as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to the disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track them with a resolution as fine as the discretization of the temporally evolving complex scalar field. In addition, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of a superconducting material can be understood in greater detail than previously possible.
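
    The basic singularity-extraction step, locating vortices at a fixed time by the phase winding of the discretized order parameter, can be sketched with numpy (a simplified 2D version; the space-time graph construction of the paper is not shown):

```python
import numpy as np

def vortex_charges(psi):
    """Topological charge of each grid plaquette of a discretized complex
    field: the winding of the phase around the four corner sites."""
    phase = np.angle(psi)

    def wrap(d):
        # Wrap phase differences into [-pi, pi).
        return (d + np.pi) % (2 * np.pi) - np.pi

    # Traverse each plaquette counterclockwise (rows index y, columns x).
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # right along the bottom edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # up the right edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # left along the top edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # down the left edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A field with a single +1 vortex between the central grid points.
y, x = np.mgrid[-8:8, -8:8] + 0.5
psi = (x + 1j * y) / np.abs(x + 1j * y)
charges = vortex_charges(psi)
print(charges.sum())  # 1: exactly one plaquette carries the vortex
```

Connecting charged plaquettes across neighboring cells (and, in the paper, across time steps) yields the graph representation of the vortex.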

  11. New preconditioning strategy for Jacobian-free solvers for variably saturated flows with Richards’ equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil

    2016-04-29

    We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing the PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. The efficiency and robustness of existing solvers rely on many factors, including an empirical quality control of intermediate iterates, the complexity of the employed discretization method, and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves the convergence of existing Jacobian-free solvers by a factor of 3-20. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than several widely used Jacobian-free solvers.
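
    The Jacobian-free building block such solvers rely on, approximating the Jacobian-vector product by a finite difference of the residual, can be sketched as follows (toy residual function, not Richards' equation):

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free product J(u) @ v via a first-order finite difference,
    the core operation of Jacobian-free Newton-Krylov solvers."""
    return (F(u + eps * v) - F(u)) / eps

# Toy nonlinear residual F(u) = u**3 - 1, whose Jacobian is diag(3 u**2).
F = lambda u: u ** 3 - 1.0
u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 0.5, -1.0])
approx = jfnk_matvec(F, u, v)
exact = 3 * u ** 2 * v
print(np.allclose(approx, exact, rtol=1e-5))  # True
```

Because only residual evaluations are needed, the Jacobian is never assembled; the paper's contribution is a preconditioner that makes the inner Krylov iterations of such a solver converge faster.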

  12. Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2D QDFT

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.

    2017-03-01

    The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the color images are considered in quaternion space. Quaternions are four-dimensional hyper-complex numbers. The quaternion representation of a color image allows us to treat the color at each pixel as a single unit. In the quaternion approach to color image enhancement, each color is treated as a vector, which captures the merging effect produced by the combination of the primary colors. Conventionally, color images are processed by applying the respective algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is determined before taking the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into different zones and alpha-rooting is applied with different alpha values for different zones. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
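
    Alpha-rooting on a single grayscale channel can be sketched with numpy as follows; the paper applies the same idea to the quaternion spectrum of a color image rather than channel by channel:

```python
import numpy as np

def alpha_rooting(image, alpha=0.9):
    """Enhance an image by raising the DFT magnitudes to the power alpha
    (0 < alpha < 1) while keeping the phase, then inverting the transform."""
    F = np.fft.fft2(image)
    mag, phase = np.abs(F), np.angle(F)
    enhanced = np.fft.ifft2(mag ** alpha * np.exp(1j * phase))
    return np.real(enhanced)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
out = alpha_rooting(img, alpha=0.9)
```

Raising magnitudes to a power below one compresses the dynamic range of the spectrum, boosting weaker (typically high-frequency) components relative to strong ones; alpha = 1 leaves the image unchanged.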

  13. Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)

    NASA Astrophysics Data System (ADS)

    Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao

    2017-04-01

    This study proposed a novel method to extract endmembers from a hyperspectral image based on the discrete firefly algorithm (EE-DFA). Endmembers are the input of many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted on synthetic hyperspectral datasets with different SNR and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters in the proposed method was tested on both the synthetic hyperspectral datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting was proposed. The results in this study demonstrated that the proposed EE-DFA method performs better than the existing popular methods. Moreover, EE-DFA is robust under different SNR conditions.

  14. Double image encryption in Fresnel domain using wavelet transform, gyrator transform and spiral phase masks

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta

    2017-06-01

    In this paper, we propose a new technique for double image encryption in the Fresnel domain using the wavelet transform (WT), gyrator transform (GT) and spiral phase masks (SPMs). The two input images are first phase encoded, and each is then multiplied with an SPM and Fresnel propagated with distance d1 or d2, respectively. The single-level discrete WT is applied to the Fresnel-propagated complex images to decompose each into sub-band matrices, i.e. LL, HL, LH and HH. Further, the sub-band matrices of the two complex images are interchanged after modulation with random phase masks (RPMs) and subjected to the inverse discrete WT. The resulting images are then both added and subtracted to get intermediate images, which are further Fresnel propagated with distances d3 and d4, respectively. These outputs are finally gyrator transformed with the same angle α to get the encrypted images. The proposed technique provides enhanced security in terms of a large set of security keys. The sensitivity of security keys such as the SPM parameters, the GT angle α and the Fresnel propagation distances is investigated. The robustness of the proposed technique against noise and occlusion attacks is also analysed. Numerical simulation results are shown in support of the validity and effectiveness of the proposed technique.

  15. Review of Image Quality Measures for Solar Imaging

    NASA Astrophysics Data System (ADS)

    Popowicz, Adam; Radlak, Krystian; Bernacki, Krzysztof; Orlov, Valeri

    2017-12-01

    Observations of the solar photosphere from the ground encounter significant problems caused by Earth's turbulent atmosphere. Before image reconstruction techniques can be applied, the frames obtained in the most favorable atmospheric conditions (the so-called lucky frames) have to be carefully selected. However, estimating the quality of images containing complex photospheric structures is not a trivial task, and the standard routines applied in nighttime lucky imaging observations are not applicable. In this paper we evaluate 36 methods dedicated to the assessment of image quality, which were presented in the literature over the past 40 years. We compare their effectiveness on simulated solar observations of both active regions and granulation patches, using reference data obtained by the Solar Optical Telescope on the Hinode satellite. To create images that are affected by a known degree of atmospheric degradation, we employed the random wave vector method, which faithfully models all the seeing characteristics. The results provide useful information about the method performances, depending on the average seeing conditions expressed by the ratio of the telescope's aperture to the Fried parameter, D/r0. The comparison identifies three methods for consideration by observers: Helmli and Scherer's mean, the median filter gradient similarity, and the discrete cosine transform energy ratio. While the first method requires less computational effort and can be used effectively in virtually any atmospheric conditions, the second method shows its superiority at good seeing (D/r0<4). The third method should mainly be considered for the post-processing of strongly blurred images.
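
    The discrete cosine transform energy ratio, one of the three recommended measures, can be sketched as follows; this is a simplified numpy version with an assumed low-frequency cutoff, and the exact definition used in the paper may differ:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct_energy_ratio(image, cutoff=4):
    """Sharpness score: fraction of AC spectral energy lying outside the
    low-frequency cutoff x cutoff corner of the 2D DCT."""
    n = image.shape[0]
    C = dct_matrix(n)
    D = C @ image @ C.T
    total = (D ** 2).sum() - D[0, 0] ** 2          # ignore the DC term
    low = (D[:cutoff, :cutoff] ** 2).sum() - D[0, 0] ** 2
    return (total - low) / total

rng = np.random.default_rng(2)
sharp = rng.random((32, 32))
# Crude blur: average each pixel with three circular-shifted neighbours.
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, -1, 0)) / 4
```

Blurring removes high-frequency energy, so sharper (less atmospherically degraded) frames score higher, which is what makes the measure usable for lucky-frame selection.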

  16. Structural-functional lung imaging using a combined CT-EIT and a Discrete Cosine Transformation reconstruction method.

    PubMed

    Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-05-16

    Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of the DCT enables a reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational cost. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications.

  17. Structural-functional lung imaging using a combined CT-EIT and a Discrete Cosine Transformation reconstruction method

    PubMed Central

    Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-01-01

    Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of the DCT enables a reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational cost. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695

  18. Application of network methods for understanding evolutionary dynamics in discrete habitats.

    PubMed

    Greenbaum, Gili; Fefferman, Nina H

    2017-06-01

    In populations occupying discrete habitat patches, gene flow between habitat patches may form an intricate population structure. In such structures, the evolutionary dynamics resulting from the interaction of gene-flow patterns with other evolutionary forces may be exceedingly complex. Several models describing gene flow between discrete habitat patches have been presented in the population-genetics literature; however, these models have usually addressed relatively simple settings of habitable patches and have stopped short of providing general methodologies for addressing nontrivial gene-flow patterns. In recent decades, network theory - a branch of discrete mathematics concerned with complex interactions between discrete elements - has been applied to several problems in population genetics by modelling gene flow between habitat patches using networks. Here, we present the ideas and concepts of modelling complex gene flows in discrete habitats using networks. Our goal is to raise awareness of existing network theory applications in molecular ecology studies, as well as to outline the current and potential contribution of network methods to the understanding of evolutionary dynamics in discrete habitats. We review the main branches of network theory that have been, or that we believe could potentially be, applied to population genetics and molecular ecology research. We address applications to theoretical modelling and to empirical population-genetic studies, and we highlight future directions for extending the integration of network science with molecular ecology. © 2017 John Wiley & Sons Ltd.

  19. The Karhunen-Loeve, discrete cosine, and related transforms obtained via the Hadamard transform. [for data compression

    NASA Technical Reports Server (NTRS)

    Jones, H. W.; Hein, D. N.; Knauer, S. C.

    1978-01-01

    A general class of even/odd transforms is presented that includes the Karhunen-Loeve transform, the discrete cosine transform, the Walsh-Hadamard transform, and other familiar transforms. The more complex even/odd transforms can be computed by combining a simpler even/odd transform with a sparse matrix multiplication. A theoretical performance measure is computed for some even/odd transforms, and two image compression experiments are reported.
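
    A building block of this transform family, the fast Walsh-Hadamard transform in its simple Sylvester (natural-order) form, can be sketched as:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (natural order); len(x) must be a
    power of two. Runs in O(n log n) via in-place butterflies."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
X = fwht(x)
# The unnormalized transform satisfies H H = n I, so it inverts itself up to n.
print(np.allclose(fwht(X) / len(x), x))  # True
```

The cited paper's point is that costlier even/odd transforms such as the DCT can be obtained from this cheap transform plus a sparse matrix multiplication.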

  20. Advanced image fusion algorithms for Gamma Knife treatment planning. Evaluation and proposal for clinical use.

    PubMed

    Apostolou, N; Papazoglou, Th; Koutsouris, D

    2006-01-01

    Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms on the Matlab platform applied to head images. We develop nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian pyramid, filter-subtract-decimate (FSD) pyramid, contrast pyramid, gradient pyramid, morphological pyramid, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), the Mutual Information (MI), the Standard Deviation (STD), the Entropy (H), the Difference Entropy (DH) and the Cross Entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally we make clinically useful suggestions.
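
    The simplest of the compared methods, average fusion, together with two of the quantitative criteria (RMSE and entropy) can be sketched with numpy; the data below are synthetic stand-ins, not head images:

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-wise average fusion, the simplest of the compared methods."""
    return (a + b) / 2.0

def entropy(img, bins=64):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(3)
ref = rng.random((64, 64))                      # ground-truth stand-in
modality1 = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
modality2 = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
fused = fuse_average(modality1, modality2)
# Averaging two independent noisy views reduces the noise level.
print(rmse(fused, ref) < rmse(modality1, ref))
```

The pyramid and wavelet methods in the paper replace the plain average with per-scale selection rules, but they are scored with the same kind of criteria.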

  1. Combined MEG-EEG source localisation in patients with sub-acute sclerosing pan-encephalitis.

    PubMed

    Velmurugan, J; Sinha, Sanjib; Nagappa, Madhu; Mariyappa, N; Bindu, P S; Ravi, G S; Hazra, Nandita; Thennarasu, K; Ravi, V; Taly, A B; Satishchandra, P

    2016-08-01

    To study the genesis and propagation patterns of periodic complexes (PCs) associated with myoclonic jerks in sub-acute sclerosing pan-encephalitis (SSPE) using magnetoencephalography (MEG) and electroencephalography (EEG). Simultaneous recording of MEG (306 channels) and EEG (64 channels) in five patients with SSPE (M:F = 3:2; age 10.8 ± 3.2 years; symptom duration 6.2 ± 10 months) was carried out using the Elekta Neuromag(®) TRIUX™ system. Qualitative analysis of 80-160 PCs per patient was performed. Ten isomorphic classical PCs with significant field topography per patient were analysed at the 'onset' and at the 'earliest significant peak' of the burst using discrete and distributed source imaging methods. The MEG background was asymmetrical in 2 and slow in 3 patients. Complexes were periodic (3) or quasi-periodic (2), occurring every 4-16 s, and varied in morphology among patients. Mean source localization at the onset of bursts using discrete and distributed source imaging was in the thalami and/or insula for magnetic source imaging (MSI) (50 and 50 %, respectively), and likewise in the thalami and/or insula for electric source imaging (ESI) (38 and 46 %, respectively). Mean source localization at the earliest rising phase of the peak was in the peri-central gyrus for MSI (49 and 42 %) and in the frontal cortex for ESI (52 and 56 %). Further analysis revealed that PCs were generated in the thalami and/or insula and thereafter propagated to the anterolateral surface of the cortices (viz. sensori-motor cortex and frontal cortex) on the same side as that of the onset. This novel MEG-EEG based case series of PCs provides newer insights for understanding the plausible generators of myoclonus in SSPE and the patterns of their propagation.

  2. Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs

    PubMed Central

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-01-01

    In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both the segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed using pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as their large memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered from the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, which shows maintained performance with a strongly reduced model complexity. PMID:24717540

  3. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and propose the use of Tikhonov's regularization method for investigating their correctness. An important role in this problem is played by the classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas, which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of a generalized Haar wavelet series with approximate coefficients is proved for functions from this class. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth provides medium- and high-spatial-resolution imagery from spacecraft and supports hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images with discrete orthogonal transformations, in particular generalized Haar wavelet transforms. Research methods: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem, and the regularization parameters for the discrete orthogonal transformations are determined.
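
    The discrete Haar transform underlying these wavelet series can be sketched in its one-level orthonormal form:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal discrete Haar transform."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return s, d

def haar_idwt(s, d):
    """Invert one level of the Haar transform."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 8.0])
s, d = haar_dwt(x)
print(np.allclose(haar_idwt(s, d), x))  # True: perfect reconstruction
```

Because the transform is orthonormal it preserves signal energy exactly, which is one reason truncated Haar series behave well under the regularization studied in the paper.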

  4. A Semi-Discrete Landweber-Kaczmarz Method for Cone Beam Tomography and Laminography Exploiting Geometric Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2016-12-01

    We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, like flat detectors in computerized tomography as used in non-destructive testing applications as well as non-regular scanning curves e.g. appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to a significantly increased image quality and superior reconstructions compared to standard iterative methods.
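
    The classical, fully discrete Landweber iteration on which the method builds can be sketched for a generic linear problem Ax = b; the semi-discrete weighting and Kaczmarz sweeping of the paper are omitted:

```python
import numpy as np

def landweber(A, b, iterations=500, relaxation=None):
    """Landweber iteration x_{k+1} = x_k + w * A.T @ (b - A @ x_k)
    for the linear reconstruction problem A x = b."""
    if relaxation is None:
        # w < 2 / ||A||_2^2 guarantees convergence; use 1 / ||A||_2^2.
        relaxation = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + relaxation * A.T @ (b - A @ x)
    return x

# Small consistent system: the iteration recovers the true solution.
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
x_true = np.array([1.0, -2.0])
x_rec = landweber(A, A @ x_true)
print(np.allclose(x_rec, x_true, atol=1e-4))  # True
```

For ill-posed tomographic problems the iteration count acts as a regularization parameter; stopping early suppresses noise amplification.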

  5. Parameter Estimation for the Blind Restoration of Blurred Imagery.

    DTIC Science & Technology

    1986-09-01

    [Abstract unavailable; only fragments of the report's front matter were extracted. The table of contents covers the noise process and restoration methods, including the inverse filter and the Wiener filter, with tables of restored pictures and noise variances. The notation list defines g(x,y) as the degraded image, G(u,v) as its discrete Fourier transform, n(x,y) as the noise, and N(u,v) as the discrete Fourier transform of n(x,y).]

  6. Enhanced Imaging of Corrosion in Aircraft Structures with Reverse Geometry X-ray(registered tm)

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cmar-Mascis, Noreen A.; Parker, F. Raymond

    2000-01-01

    The application of Reverse Geometry X-ray to the detection and characterization of corrosion in aircraft structures is presented. Reverse Geometry X-ray is a unique system that utilizes an electronically scanned x-ray source and a discrete detector for real time radiographic imaging of a structure. The scanned source system has several advantages when compared to conventional radiography. First, the discrete x-ray detector can be miniaturized and easily positioned inside a complex structure (such as an aircraft wing) enabling images of each surface of the structure to be obtained separately. Second, using a measurement configuration with multiple detectors enables the simultaneous acquisition of data from several different perspectives without moving the structure or the measurement system. This provides a means for locating the position of flaws and enhances separation of features at the surface from features inside the structure. Data is presented on aircraft specimens with corrosion in the lap joint. Advanced laminographic imaging techniques utilizing data from multiple detectors are demonstrated to be capable of separating surface features from corrosion in the lap joint and locating the corrosion in multilayer structures. Results of this technique are compared to computed tomography cross sections obtained from a microfocus x-ray tomography system. A method is presented for calibration of the detectors of the Reverse Geometry X-ray system to enable quantification of the corrosion to within 2%.

  7. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images processed without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not achieve the most significant noise reduction. Conversely, an image fusion method applied to the SRAD-original input conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance on the ultrasound images, and the proposed denoising technique was confirmed to have high potential for clinical application.

  8. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are performed to verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and thus realize denoising during reconstruction, with a higher denoising capability than the CGI algorithm. Overall, the FGI algorithm improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
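
    The core reconstruction step described above, measuring with a deterministic transform matrix and inverting with the pseudo-inverse, can be sketched in a toy 1D setting (the DFT measurement matrix and variable names here are our own illustration, not the paper's code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 16
    obj = rng.random(n)                     # unknown object (flattened pixels)

    # Preset illumination patterns: rows of a discrete Fourier transform matrix.
    k, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    A = np.exp(-2j * np.pi * k * j / n)     # full-rank DFT measurement matrix

    y = A @ obj                             # bucket-detector measurements
    obj_rec = (np.linalg.pinv(A) @ y).real  # pseudo-inverse reconstruction
    ```

    With a full-rank measurement matrix and as many measurements as pixels, the pseudo-inverse recovers the object exactly; the abstract's comparisons concern what happens when the number of measurements drops below the pixel count.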

  9. Computer-based Learning of Neuroanatomy: A Longitudinal Study of Learning, Transfer, and Retention

    PubMed Central

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2013-01-01

    A longitudinal experiment was conducted to evaluate the effectiveness of new methods for learning neuroanatomy with computer-based instruction. Using a 3D graphical model of the human brain, and sections derived from the model, tools for exploring neuroanatomy were developed to encourage adaptive exploration. This is an instructional method which incorporates graphical exploration in the context of repeated testing and feedback. With this approach, 72 participants learned either sectional anatomy alone or whole anatomy followed by sectional anatomy. Sectional anatomy was explored either with perceptually continuous navigation through the sections or with discrete navigation (as in the use of an anatomical atlas). Learning was measured longitudinally to a high performance criterion. Subsequent tests examined transfer of learning to the interpretation of biomedical images and long-term retention. There were several clear results of this study. On initial exposure to neuroanatomy, whole anatomy was learned more efficiently than sectional anatomy. After whole anatomy was mastered, learners demonstrated high levels of transfer of learning to sectional anatomy and from sectional anatomy to the interpretation of complex biomedical images. Learning whole anatomy prior to learning sectional anatomy led to substantially fewer errors overall than learning sectional anatomy alone. Use of continuous or discrete navigation through sectional anatomy made little difference to measured outcomes. Efficient learning, good long-term retention, and successful transfer to the interpretation of biomedical images indicated that computer-based learning using adaptive exploration can be a valuable tool in instruction of neuroanatomy and similar disciplines. PMID:23349552

  10. Seeing mathematics: perceptual experience and brain activity in acquired synesthesia.

    PubMed

    Brogaard, Berit; Vanni, Simo; Silvanto, Juha

    2013-01-01

    We studied the patient JP who has exceptional abilities to draw complex geometrical images by hand and a form of acquired synesthesia for mathematical formulas and objects, which he perceives as geometrical figures. JP sees all smooth curvatures as discrete lines, similarly regardless of scale. We carried out two preliminary investigations to establish the perceptual nature of synesthetic experience and to investigate the neural basis of this phenomenon. In a functional magnetic resonance imaging (fMRI) study, image-inducing formulas produced larger fMRI responses than non-image inducing formulas in the left temporal, parietal and frontal lobes. Thus our main finding is that the activation associated with his experience of complex geometrical images emerging from mathematical formulas is restricted to the left hemisphere.

  11. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted LSQ gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.

  12. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.

  13. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.

  14. Modeling loosely annotated images using both given and imagined annotations

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Boujemaa, Nozha; Chen, Yunhao; Deng, Lei

    2011-12-01

    In this paper, we present an approach to learn latent semantic analysis models from loosely annotated images for automatic image annotation and indexing. The given annotation in training images is loose due to: 1. ambiguous correspondences between visual features and annotated keywords; 2. incomplete lists of annotated keywords. The second reason motivates us to enrich the incomplete annotation in a simple way before learning a topic model. In particular, some ``imagined'' keywords are poured into the incomplete annotation through measuring similarity between keywords in terms of their co-occurrence. Then, both given and imagined annotations are employed to learn probabilistic topic models for automatically annotating new images. We conduct experiments on two image databases (i.e., Corel and ESP) coupled with their loose annotations, and compare the proposed method with state-of-the-art discrete annotation methods. The proposed method improves word-driven probability latent semantic analysis (PLSA-words) up to a comparable performance with the best discrete annotation method, while a merit of PLSA-words is still kept, i.e., a wider semantic range.

  15. Continuum Level Density in Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Myo, T.; Katō, K.

    2005-11-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.
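
    In the standard formulation (the notation here is ours, consistent with the complex scaling literature), the CLD is the difference between the full and free level densities,

    ```latex
    \Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\!\left[\frac{1}{E - H_\theta} - \frac{1}{E - H_{0,\theta}}\right]
    \;\approx\; -\frac{1}{\pi}\,\mathrm{Im}\!\left[\sum_{r}\frac{1}{E - E_r^\theta} - \sum_{k}\frac{1}{E - \epsilon_k^\theta}\right],
    ```

    where $H_\theta$ and $H_{0,\theta}$ are the complex-scaled full and free Hamiltonians and the sums run over the discretized resonance and rotated-continuum eigenvalues. Because complex scaling rotates the continuum off the real energy axis, the discretized sums converge without any artificial smoothing parameter, which is the point of the abstract.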

  16. Effects of image charges, interfacial charge discreteness, and surface roughness on the zeta potential of spherical electric double layers.

    PubMed

    Gan, Zecheng; Xing, Xiangjun; Xu, Zhenli

    2012-07-21

    We investigate the effects of image charges, interfacial charge discreteness, and surface roughness on spherical electric double layer structures in electrolyte solutions with divalent counterions in the setting of the primitive model. By using Monte Carlo simulations and the image charge method, the zeta potential profile and the integrated charge distribution function are computed for varying surface charge strengths and salt concentrations. Systematic comparisons were carried out between three distinct models for interfacial charges: (1) SURF1 with uniform surface charges, (2) SURF2 with discrete point charges on the interface, and (3) SURF3 with discrete interfacial charges and finite excluded volume. By comparing the integrated charge distribution function and the zeta potential profile, we argue that the potential at the distance of one ion diameter from the macroion surface is a suitable location to define the zeta potential. In SURF2 model, we find that image charge effects strongly enhance charge inversion for monovalent interfacial charges, and strongly suppress charge inversion for multivalent interfacial charges. For SURF3, the image charge effect becomes much smaller. Finally, with image charges in action, we find that excluded volumes (in SURF3) suppress charge inversion for monovalent interfacial charges and enhance charge inversion for multivalent interfacial charges. Overall, our results demonstrate that all these aspects, i.e., image charges, interfacial charge discreteness, their excluding volumes, have significant impacts on zeta potentials of electric double layers.

  17. Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.

    2005-01-01

    A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
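
    The polynomial-evaluation view of the DCT can be made concrete with Clenshaw's algorithm (the comparison baseline named in the abstract; Bakhvalov's algorithm itself is not shown here): the unnormalized DCT-I sums X_k = sum_n a_n cos(pi*n*k/(N-1)) are exactly Chebyshev-series evaluations at the points x_k = cos(pi*k/(N-1)). A hedged numpy sketch under that convention:

    ```python
    import numpy as np

    def clenshaw_chebyshev(a, x):
        """Evaluate sum_k a[k] * T_k(x) by Clenshaw's backward recurrence."""
        b1, b2 = 0.0, 0.0
        for ak in a[:0:-1]:                 # a[n], ..., a[1]
            b1, b2 = ak + 2.0 * x * b1 - b2, b1
        return a[0] + x * b1 - b2

    def dct1_via_clenshaw(a):
        """Unnormalized DCT-I sums computed as Chebyshev evaluations."""
        N = len(a)
        xs = np.cos(np.pi * np.arange(N) / (N - 1))
        return np.array([clenshaw_chebyshev(a, x) for x in xs])

    a = np.array([1.0, 2.0, 0.5, -1.0, 0.25])
    n, k = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
    direct = (a[:, None] * np.cos(np.pi * n * k / 4)).sum(axis=0)
    ```

    The identity T_n(cos t) = cos(nt) is what turns trigonometric sums into polynomial evaluations, which is the common ground between Clenshaw's and Bakhvalov's approaches.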

  18. A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure

    DTIC Science & Technology

    1989-04-14

    [Abstract extracted only in fragments.] ... element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. ... The exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant ... The aim of the study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial geologies.

  19. Registration of segmented histological images using thin plate splines and belief propagation

    NASA Astrophysics Data System (ADS)

    Kybic, Jan

    2014-03-01

    We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as the similarity criterion. It is evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.
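
    The discrete mutual information used above as a similarity criterion can be estimated from a joint label histogram; a minimal sketch (our own helper, not the paper's code):

    ```python
    import numpy as np

    def mutual_information(labels_a, labels_b):
        """Discrete mutual information between two class-label maps,
        estimated from their joint histogram."""
        a = np.asarray(labels_a).ravel()
        b = np.asarray(labels_b).ravel()
        joint = np.zeros((a.max() + 1, b.max() + 1))
        np.add.at(joint, (a, b), 1.0)       # accumulate joint counts
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)  # marginal of map A
        py = pxy.sum(axis=0, keepdims=True)  # marginal of map B
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    # Identical segmentations: MI equals the entropy of the label map.
    seg = np.array([[0, 0, 1], [1, 2, 2]])
    mi = mutual_information(seg, seg)
    ```

    Maximizing this quantity over displacements favors alignments where knowing the class label in one image predicts the label in the other, without requiring intensity correspondence.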

  20. Nonconforming mortar element methods: Application to spectral discretizations

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Mavriplis, Cathy; Patera, Anthony

    1988-01-01

    Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.

  1. 3D GGO candidate extraction in lung CT images using multilevel thresholding on supervoxels

    NASA Astrophysics Data System (ADS)

    Huang, Shan; Liu, Xiabi; Han, Guanghui; Zhao, Xinming; Zhao, Yanfeng; Zhou, Chunwu

    2018-02-01

    The earlier detection of ground glass opacity (GGO) is of great importance since GGOs are more likely to be malignant than solid nodules. However, the detection of GGO is a difficult task in lung cancer screening. This paper proposes a novel GGO candidate extraction method, which performs multilevel thresholding on supervoxels in 3D lung CT images. Firstly, we segment the lung parenchyma based on the Otsu algorithm. Secondly, the voxels which are adjacent in 3D discrete space and share similar grayscale are clustered into supervoxels; this procedure is used to enhance GGOs and reduce computational complexity. Thirdly, the Hessian matrix is used to emphasize focal GGO candidates. Lastly, an improved adaptive multilevel thresholding method is applied to the segmented clusters to extract GGO candidates. The proposed method was evaluated on a set of 19 lung CT scans containing 166 GGO lesions from the Lung CT Imaging Signs (LISS) database. The experimental results show that our proposed GGO candidate extraction method is effective, with a sensitivity of 100% and 26.3 false positives per scan (665 GGO candidates: 499 non-GGO regions and 166 GGO regions). It can handle both focal GGOs and diffuse GGOs.

  2. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves and uses it to drive machinery. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, from which the wave frequency in the direction of the WEC follows by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images for which the frequency is known.
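
    As a simplified stand-in for the complex modulated lapped transform filter bank, the dominant horizontal and vertical wave frequencies can be located as the peak of a plain 2D spectrum; a toy sketch on a synthetic plane-wave "satellite image" (the CMLT of the paper provides an equal subband decomposition with better localization than this bare FFT, and the frequency values below are our own test inputs):

    ```python
    import numpy as np

    # Synthetic image of a plane wave with known spatial frequency.
    n = 128
    fx, fy = 6, 10                       # cycles per image in x and y (assumed)
    y, x = np.mgrid[0:n, 0:n]
    img = np.cos(2 * np.pi * (fx * x + fy * y) / n)

    # Locate the dominant frequency as the peak of the 2D magnitude
    # spectrum, ignoring the DC term.
    spec = np.abs(np.fft.fft2(img))
    spec[0, 0] = 0.0
    ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
    ky = min(ky, n - ky)                 # fold to the positive half-spectrum
    kx = min(kx, n - kx)
    wave_freq = np.hypot(kx, ky) / n     # cycles per pixel
    ```

    The final trigonometric scaling mentioned in the abstract would project this radial frequency onto the direction of the WEC.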

  3. Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li

    2013-12-01

    During the exposure time of a space-borne TDICCD push-broom camera, the charge transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor are required to match each other strictly. However, as attitude disturbance of the satellite and vibration of the camera are inevitable, it is impossible to eliminate the speed mismatch, which makes the signals of different targets overlap each other and results in a decline of image resolution. The effects of velocity mismatch can be visually observed and analyzed by simulating the degradation of image quality caused by the vibration of the optical axis, which is significant for the evaluation of image quality and the design of image restoration algorithms. The first problem to be solved is how to build a model in the time and space domains during the imaging time. As the vibration information for simulation is usually given by a continuous curve while the pixels of the original image matrix and the sensor matrix are discrete, the two cannot always match each other well, and the simulation is further affected by discrete sampling within the integration time. An appropriate discrete modeling and simulation method is therefore essential for improving simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and presents a method, based on the principle of the TDICCD sensor, to simulate the image quality of the optical system under vibration of the line of sight. The gray value of pixels in the sensor matrix is obtained by a weighted arithmetic, which solves the problem of pixel mismatch. The results, compared with hardware test experiments, indicate that this simulation system performs well in accuracy and reliability.

  4. An optical Fourier transform coprocessor with direct phase determination.

    PubMed

    Macfaden, Alexander J; Gordon, George S D; Wilkinson, Timothy D

    2017-10-20

    The Fourier transform is a ubiquitous mathematical operation which arises naturally in optics. We propose and demonstrate a practical method to optically evaluate a complex-to-complex discrete Fourier transform. By implementing the Fourier transform optically we can overcome the limiting O(n log n) complexity of fast Fourier transform algorithms. Efficiently extracting the phase from the well-known optical Fourier transform is challenging. By appropriately decomposing the input and exploiting symmetries of the Fourier transform we are able to determine the phase directly from straightforward intensity measurements, creating an optical Fourier transform with O(n) apparent complexity. Performing larger optical Fourier transforms requires higher resolution spatial light modulators, but the execution time remains unchanged. This method could unlock the potential of the optical Fourier transform to permit 2D complex-to-complex discrete Fourier transforms with a performance that is currently untenable, with applications across information processing and computational physics.

  5. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
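
    The index additivity that the encryption scheme relies on can be sketched with a generic fractional random transform built from the eigenbasis of a random symmetric matrix (this construction and its phase convention are our own illustration, not the paper's exact transform):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 8

    # Orthonormal eigenvector basis of a random symmetric matrix;
    # the random matrix plays the role of the secret key.
    S = rng.random((n, n))
    _, V = np.linalg.eigh(S + S.T)

    def frt(alpha):
        """Fractional random transform of order alpha (sketch):
        F^alpha = V diag(exp(-i*pi*alpha*k)) V^T, so orders add."""
        phases = np.exp(-1j * np.pi * alpha * np.arange(n))
        return (V * phases) @ V.T

    img = rng.random(n)                    # one row of the plaintext image
    cipher = frt(0.37) @ img               # encrypt with secret order 0.37
    plain = (frt(-0.37) @ cipher).real     # decrypt via index additivity
    ```

    Because F^a F^b = F^(a+b), applying the transform of order -a undoes order a exactly; without knowledge of both the random matrix and the fractional orders, inversion fails, which is the basis of the key sensitivity claimed in the abstract.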

  6. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve the edges of the image in segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is approximated by discretizing the integral. A method that selects the optimal threshold by maximizing the edge energy function is given. Several experimental results are also presented for comparison with Otsu's method.
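
    The Otsu baseline that this record analyzes selects the threshold maximizing the between-class variance of the gray-level histogram; a compact numpy sketch (the test image is our own synthetic example):

    ```python
    import numpy as np

    def otsu_threshold(image, nbins=256):
        """Otsu's method: maximize between-class variance over thresholds."""
        hist, edges = np.histogram(image, bins=nbins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(p)                    # class-0 probability
        mu = np.cumsum(p * centers)          # cumulative mean
        mu_t = mu[-1]
        w1 = 1.0 - w0
        valid = (w0 > 0) & (w1 > 0)
        sigma_b = np.zeros_like(w0)
        sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
        return centers[np.argmax(sigma_b)]

    # Bimodal test image: two well-separated gray-level populations.
    rng = np.random.default_rng(1)
    img = np.concatenate([rng.normal(50, 5, 1000), rng.normal(200, 5, 1000)])
    t = otsu_threshold(img)
    ```

    As the abstract notes, this criterion looks only at the histogram; the paper's edge energy criterion instead scores thresholds by how well the resulting segmentation preserves image edges.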

  7. Comments on `Area and power efficient DCT architecture for image compression' by Dhandapani and Ramachandran

    NASA Astrophysics Data System (ADS)

    Cintra, Renato J.; Bayer, Fábio M.

    2017-12-01

    In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.

  8. Reprint of Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-04-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem-cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  9. Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-03-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem-cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  10. Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.

    PubMed

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-05-01

    In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered from the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced model complexity. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that omits the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method performs better even with fewer samples.
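
A projected-Landweber-style CS recovery alternates a gradient (Landweber) step with a sparsity-enforcing thresholding. The sketch below is a generic iterative soft-thresholding illustration, not the paper's smoothed projected Landweber pipeline; the measurement matrix, signal, and threshold are synthetic:

```python
import numpy as np

def landweber_threshold(A, y, lam=0.05, step=None, iters=500):
    """Landweber iteration with soft thresholding (a generic ISTA-style
    sketch of thresholded CS recovery, not the paper's SPL algorithm)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # keep the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)          # Landweber (gradient) step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # 40 random measurements
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]            # 3-sparse signal
y = A @ x_true
x_hat = landweber_threshold(A, y)
```

The soft-threshold plays the role of the adaptive thresholding inside the recovery loop: it projects each iterate toward the sparse signal model.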

  12. Significance of the impact of motion compensation on the variability of PET image features

    NASA Astrophysics Data System (ADS)

    Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.

    2018-03-01

    In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frames of 4D-PET. Thirty-one PET features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN), and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast-oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and the PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients.
For our patient cohort, the compensation of tumor motion did not have a significant impact on the quantitative PET parameters. The variability of PET parameters due to voxel size in image reconstruction was more significant than variability due to voxel size in image post-resampling. In conclusion, most of the parameters (apart from the contrast of neighborhood matrix) were robust to the motion compensation implied by 4D-PET/CT. The impact on parameter variability due to the voxel size in image reconstruction and in image post-resampling could not be assumed to be equivalent.
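
The two SUV discretization schemes RW and RN can be sketched directly from their definitions; the bin width and bin count below are hypothetical choices for illustration:

```python
import numpy as np

def discretize_rw(suv, width=0.5):
    """Method RW: constant bin width in SUV units."""
    return np.floor(suv / width).astype(int)

def discretize_rn(suv, n_bins=64):
    """Method RN: constant number of bins over the lesion's SUV range."""
    lo, hi = suv.min(), suv.max()
    idx = np.floor(n_bins * (suv - lo) / (hi - lo)).astype(int)
    return np.clip(idx, 0, n_bins - 1)   # the maximum falls in the top bin

suv = np.array([0.2, 1.1, 2.6, 4.9, 7.3])  # toy SUV samples
rw = discretize_rw(suv)      # absolute bins of width 0.5 SUV
rn = discretize_rn(suv, 4)   # 4 bins relative to this lesion's range
```

Method EqRN would apply RN after histogram equalization; RW preserves absolute SUV resolution across patients, while RN adapts the bins to each lesion.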

  13. Improved discrete swarm intelligence algorithms for endmember extraction from hyperspectral remote sensing images

    NASA Astrophysics Data System (ADS)

    Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing

    2016-10-01

    Endmember extraction is a key step in hyperspectral unmixing. A new framework for hyperspectral endmember extraction is proposed. The proposed approach is based on swarm intelligence (SI) algorithms, where a discretization step is applied because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that generally belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.

  14. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  15. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, in Phillips et al. [Phys. Rev. E 91, 023311 (2015)], we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. Additionally, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of superconducting materials can be understood in greater detail than previously possible.

  17. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
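
The complex-variable approach mentioned here for computing discrete adjoint derivatives is commonly realized as the complex-step method: perturbing the input along the imaginary axis gives a derivative estimate with no subtractive cancellation. A minimal sketch with a sample scalar function (not the paper's residual operator):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: f'(x) ~ Im f(x + i h) / h.
    Exact to machine precision because no difference of nearly
    equal quantities is formed, unlike finite differences."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)     # sample smooth function
df = complex_step_derivative(f, 1.0)    # analytic: e^x (sin x + cos x)
```

This is why the step size can be taken absurdly small (here 1e-30) without any loss of accuracy, which is what makes the approach attractive for verifying discrete adjoint operators.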

  18. Characterization of the Nencki Affective Picture System by discrete emotional categories (NAPS BE).

    PubMed

    Riegel, Monika; Żurawski, Łukasz; Wierzba, Małgorzata; Moslehi, Abnoss; Klocek, Łukasz; Horvat, Marko; Grabowska, Anna; Michałowski, Jarosław; Jednoróg, Katarzyna; Marchewka, Artur

    2016-06-01

    The Nencki Affective Picture System (NAPS; Marchewka, Żurawski, Jednoróg, & Grabowska, Behavior Research Methods, 2014) is a standardized set of 1,356 realistic, high-quality photographs divided into five categories (people, faces, animals, objects, and landscapes). NAPS has been primarily standardized along the affective dimensions of valence, arousal, and approach-avoidance, yet the characteristics of discrete emotions expressed by the images have not been investigated thus far. The aim of the present study was to collect normative ratings according to categorical models of emotions. A subset of 510 images from the original NAPS set was selected in order to proportionally cover the whole dimensional affective space. Among these, using three available classification methods, we identified images eliciting distinguishable discrete emotions. We introduce the basic-emotion normative ratings for the Nencki Affective Picture System (NAPS BE), which will allow researchers to control and manipulate stimulus properties specifically for their experimental questions of interest. The NAPS BE system is freely accessible to the scientific community for noncommercial use as supplementary materials to this article.

  19. Unstructured Cartesian/prismatic grid generation for complex geometries

    NASA Technical Reports Server (NTRS)

    Karman, Steve L., Jr.

    1995-01-01

    The generation of a hybrid grid system for discretizing complex three dimensional (3D) geometries is described. The primary grid system is an unstructured Cartesian grid automatically generated using recursive cell subdivision. This grid system is sufficient for computing Euler solutions about extremely complex 3D geometries. A secondary grid system, using triangular-prismatic elements, may be added for resolving the boundary layer region of viscous flows near surfaces of solid bodies. This paper describes the grid generation processes used to generate each grid type. Several example grids are shown, demonstrating the ability of the method to discretize complex geometries, with very little pre-processing required by the user.
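
Recursive cell subdivision can be sketched as a quadtree in 2D: any cell crossed by the body surface is split until a maximum depth is reached. The circular "body" and depth below are hypothetical; the paper's generator works on 3D geometries with additional refinement rules:

```python
import math

def subdivide(cell, crosses_body, depth, max_depth):
    """Recursive cell subdivision: split any cell crossed by the body
    surface until max_depth (a minimal 2D quadtree sketch)."""
    x, y, size = cell
    if depth == max_depth or not crosses_body(x, y, size):
        return [cell]
    half = size / 2
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += subdivide((x + dx, y + dy, half),
                                crosses_body, depth + 1, max_depth)
    return leaves

def crosses_circle(x, y, s, r=0.25):
    """True if the square [x, x+s] x [y, y+s] straddles the circle of
    radius r centered at the origin (a stand-in for a real body)."""
    px = min(max(x, 0.0), x + s)    # closest point of the cell to origin
    py = min(max(y, 0.0), y + s)
    d_min = math.hypot(px, py)
    d_max = max(math.hypot(cx, cy)
                for cx in (x, x + s) for cy in (y, y + s))
    return d_min <= r <= d_max

# Refine a unit cell around a circular body of radius 0.25.
leaves = subdivide((-0.5, -0.5, 1.0), crosses_circle, 0, 4)
```

Cells away from the body stay coarse while cells crossing the surface are refined, which is the essential behavior of the automatic Cartesian grid generation described above.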

  20. Light-triggered self-assembly of triarylamine-based nanospheres

    NASA Astrophysics Data System (ADS)

    Moulin, Emilie; Niess, Frédéric; Fuks, Gad; Jouault, Nicolas; Buhler, Eric; Giuseppone, Nicolas

    2012-10-01

    Tailored triarylamine units modified with terpyridine ligands were coordinated to Zn2+ ions and characterized as discrete dimeric entities. Interestingly, when these complexes were subsequently irradiated with simple visible light in chloroform, they readily self-assembled into monodisperse spheres with a mean diameter of 160 nm. Electronic supplementary information (ESI) available: Synthetic procedures and products' characterization (2-4 and 6-9). 1H NMR titration of compound 6 by Zn(OTf)2 to form complex 7. Kinetic measurements by UV-Vis-NIR spectroscopy. Transmission electron microscopy imaging for complexes 8 and 9. UV-Vis-NIR for an Fe2+ analogue of complex 7. Dynamic light scattering and time autocorrelation function for self-assembly of complexes 7-9. Copies of 1H and 13C NMR spectra for compounds 2-4 and 6. See DOI: 10.1039/c2nr32168h

  1. Improved damage imaging in aerospace structures using a piezoceramic hybrid pin-force wave generation model

    NASA Astrophysics Data System (ADS)

    Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice

    2014-03-01

    In this study, a correlation-based imaging technique called "Excitelet" is used to monitor an aerospace-grade aluminum plate, representative of an aircraft component. The principle is based on ultrasonic guided wave generation and sensing using three piezoceramic (PZT) transducers, and measurement of reflections induced by potential defects. The method uses a propagation model to correlate measured signals with a bank of signals, and imaging is performed using a round-robin procedure (Full-Matrix Capture). The formulation compares two models for the complex transducer dynamics: one where the shear stress at the tip of the PZT is considered to vary as a function of the frequency generated, and one where the PZT is discretized in order to consider the shear distribution under the PZT. This method takes into account the transducer dynamics and finite dimensions, the multi-modal and dispersive characteristics of the material, and the complex interactions between guided waves and damage. Experimental validation has been conducted on an aerospace-grade aluminum joint instrumented with three circular PZTs of 10 mm diameter. A magnet, acting as a reflector, is used in order to simulate a local reflection in the structure. It is demonstrated that the defect can be accurately detected and localized. The two models proposed are compared to the classical pin-force model, using narrow- and broad-band excitations. The results demonstrate the potential of the proposed imaging technique for damage monitoring of aerospace structures considering improved models for guided wave generation and propagation.

  2. Discrete particle swarm optimization for identifying community structures in signed social networks.

    PubMed

    Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng

    2014-10-01

    The modern science of networks has greatly facilitated our understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm that originated from social behavior such as birds flocking and fish schooling. PSO has proven to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which complicates its application to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, particles' status has been redesigned in discrete form so as to make PSO suitable for discrete scenarios, and particles' updating rules have been reformulated by making use of the topology of the signed network. Extensive experiments compared with three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complexity, low-bit-rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks applied to surveillance, battlefield, habitat monitoring, etc. is presented, where voluminous amounts of image data have to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. This algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without using any floating-point operations. Experiments are performed using the Atmel Atmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, making it suitable for embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for distributed processing, as was traditionally required in existing algorithms.
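
A zonal DCT evaluates only a low-frequency corner of the transform instead of all n x n coefficients. The sketch below computes a k x k zone directly from the orthonormal DCT-II definition (floating point is used here for clarity; the paper's binary variant avoids it):

```python
import numpy as np

def zonal_dct(block, k=3):
    """Zonal DCT: compute only the k x k low-frequency DCT-II
    coefficients of a square block, skipping all higher frequencies."""
    n = block.shape[0]
    i = np.arange(n)
    coeffs = np.empty((k, k))
    for u in range(k):
        for v in range(k):
            cu = np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n)
            cv = np.sqrt(1 / n) if v == 0 else np.sqrt(2 / n)
            # Separable 2D cosine basis function for frequency (u, v).
            basis = np.outer(np.cos((2 * i + 1) * u * np.pi / (2 * n)),
                             np.cos((2 * i + 1) * v * np.pi / (2 * n)))
            coeffs[u, v] = cu * cv * np.sum(block * basis)
    return coeffs

# A flat 8x8 block has all its energy in the DC coefficient.
block = np.full((8, 8), 10.0)
z = zonal_dct(block)
```

Computing only k^2 of the n^2 coefficients is what yields the large energy savings reported above, at the cost of discarding high-frequency detail.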

  4. Image-based model of the spectrin cytoskeleton for red blood cell simulation.

    PubMed

    Fai, Thomas G; Leo-Macias, Alejandra; Stokes, David L; Peskin, Charles S

    2017-10-01

    We simulate deformable red blood cells in the microcirculation using the immersed boundary method with a cytoskeletal model that incorporates structural details revealed by tomographic images. The elasticity of red blood cells is known to be supplied by both their lipid bilayer membranes, which resist bending and local changes in area, and their cytoskeletons, which resist in-plane shear. The cytoskeleton consists of spectrin tetramers that are tethered to the lipid bilayer by ankyrin and by actin-based junctional complexes. We model the cytoskeleton as a random geometric graph, with nodes corresponding to junctional complexes and with edges corresponding to spectrin tetramers such that the edge lengths are given by the end-to-end distances between nodes. The statistical properties of this graph are based on distributions gathered from three-dimensional tomographic images of the cytoskeleton by a segmentation algorithm. We show that the elastic response of our model cytoskeleton, in which the spectrin polymers are treated as entropic springs, is in good agreement with the experimentally measured shear modulus. By simulating red blood cells in flow with the immersed boundary method, we compare this discrete cytoskeletal model to an existing continuum model and predict the extent to which dynamic spectrin network connectivity can protect against failure in the case of a red cell subjected to an applied strain. The methods presented here could form the basis of disease- and patient-specific computational studies of hereditary diseases affecting the red cell cytoskeleton.

  5. Image-based model of the spectrin cytoskeleton for red blood cell simulation

    PubMed Central

    Stokes, David L.; Peskin, Charles S.

    2017-01-01

    We simulate deformable red blood cells in the microcirculation using the immersed boundary method with a cytoskeletal model that incorporates structural details revealed by tomographic images. The elasticity of red blood cells is known to be supplied by both their lipid bilayer membranes, which resist bending and local changes in area, and their cytoskeletons, which resist in-plane shear. The cytoskeleton consists of spectrin tetramers that are tethered to the lipid bilayer by ankyrin and by actin-based junctional complexes. We model the cytoskeleton as a random geometric graph, with nodes corresponding to junctional complexes and with edges corresponding to spectrin tetramers such that the edge lengths are given by the end-to-end distances between nodes. The statistical properties of this graph are based on distributions gathered from three-dimensional tomographic images of the cytoskeleton by a segmentation algorithm. We show that the elastic response of our model cytoskeleton, in which the spectrin polymers are treated as entropic springs, is in good agreement with the experimentally measured shear modulus. By simulating red blood cells in flow with the immersed boundary method, we compare this discrete cytoskeletal model to an existing continuum model and predict the extent to which dynamic spectrin network connectivity can protect against failure in the case of a red cell subjected to an applied strain. The methods presented here could form the basis of disease- and patient-specific computational studies of hereditary diseases affecting the red cell cytoskeleton. PMID:28991926

  6. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries.

    PubMed

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp-interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.

  7. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp-interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533

  8. SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.

    PubMed

    Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue

    2018-05-02

    Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. The resulting series of complex numbers is then transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.
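
The first two SSAW steps (k-mer extraction and complex mapping) plus one level of a stationary Haar transform can be sketched as follows; the nucleotide-to-complex encoding is an assumed scheme for illustration, not necessarily the one used by SSAW:

```python
import numpy as np

# Hypothetical per-base complex codes (one per quadrant of the plane).
ENC = {'A': 1 + 1j, 'C': -1 + 1j, 'G': -1 - 1j, 'T': 1 - 1j}

def kmer_signal(seq, k=3):
    """Map each overlapping k-mer to one complex number by summing
    its per-base codes (an assumed mapping for illustration)."""
    return np.array([sum(ENC[b] for b in seq[i:i + k])
                     for i in range(len(seq) - k + 1)])

def haar_swt_level(x):
    """One level of the stationary (undecimated) Haar wavelet
    transform with circular boundary handling: the outputs keep
    the same length as the input, unlike the decimated DWT."""
    shifted = np.roll(x, -1)
    approx = (x + shifted) / np.sqrt(2)
    detail = (x - shifted) / np.sqrt(2)
    return approx, detail

sig = kmer_signal("ACGTAC")      # 4 overlapping 3-mers
a, d = haar_swt_level(sig)       # feature vectors for clustering
```

The approximation and detail coefficients (or statistics derived from them) would then serve as the numeric feature vector for clustering or classification.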

  9. An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution

    NASA Astrophysics Data System (ADS)

    Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray

    2013-09-01

    In order to improve data hiding in multimedia formats such as image and audio and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used. The results show the method to be time-efficient and effective. The algorithm is also tested for various numbers of substituted bits; for each value, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover-image when n<=4, because of the better PSNR achieved by this technique. The final output of the steganography process does not reveal the presence of any hidden message, thus meeting the criterion of imperceptibility.
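
    The core LSB-substitution step is simple to demonstrate. The sketch below is a generic illustration of n-bit LSB embedding and extraction with a PSNR check, assuming an 8-bit grayscale cover image; it omits the paper's redundant-noise secret key and audio transforms.

```python
import numpy as np

def embed_lsb(cover, bits, n=2):
    """Replace the n least-significant bits of successive pixels with bits.

    `bits` is a flat 0/1 array whose length must be a multiple of n.
    """
    assert len(bits) % n == 0
    stego = cover.copy().ravel()
    groups = np.asarray(bits).reshape(-1, n)
    # Pack each group of n bits (MSB first) into one small integer.
    values = groups @ (1 << np.arange(n - 1, -1, -1))
    stego[:len(values)] &= np.uint8((0xFF << n) & 0xFF)  # clear the n LSBs
    stego[:len(values)] |= values.astype(np.uint8)
    return stego.reshape(cover.shape)

def extract_lsb(stego, num_bits, n=2):
    """Recover num_bits hidden bits from the stego image."""
    values = stego.ravel()[:num_bits // n] & ((1 << n) - 1)
    return ((values[:, None] >> np.arange(n - 1, -1, -1)) & 1).ravel()

def psnr(a, b):
    """Peak signal-to-noise ratio between cover and stego images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
secret = rng.integers(0, 2, size=128)
stego = embed_lsb(cover, secret, n=2)
print(f"PSNR: {psnr(cover, stego):.1f} dB")  # high PSNR -> imperceptible
```

Since only the bottom n bits change, each pixel's brightness shifts by at most 2^n - 1, which is why small n keeps the stego-image visually indistinguishable.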

  10. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ-measurement versus compression-parameter relationship over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
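
    The selection logic of step three can be sketched as follows. The IQ metrics (RMSE, PSNR) are standard; the per-codec regression models here are purely hypothetical placeholders for the models fitted in step two, and the coefficients are invented for illustration.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

# Hypothetical regressed models: predicted PSNR (dB) as a function of
# compression ratio for each codec. Real models would be fitted in step 2
# from (parameter, IQ) measurements on a training set of images.
models = {
    "JPEG":     lambda r: 55 - 12 * np.log10(r),
    "JPEG2000": lambda r: 58 - 12 * np.log10(r),
    "BPG":      lambda r: 60 - 12 * np.log10(r),
}

def select_method(target_ratio):
    """Step 3: pick the codec whose model predicts the highest IQ
    at the requested compression ratio."""
    return max(models, key=lambda m: models[m](target_ratio))

print(select_method(100))  # with these toy models, BPG wins
```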

  11. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.

  12. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  13. Robustness of Radiomic Features in [11C]Choline and [18F]FDG PET/CT Imaging of Nasopharyngeal Carcinoma: Impact of Segmentation and Discretization.

    PubMed

    Lu, Lijun; Lv, Wenbing; Jiang, Jun; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2016-12-01

    Radiomic features are increasingly utilized to evaluate tumor heterogeneity in PET imaging and to enable enhanced prediction of therapy response and outcome. An important ingredient to success in translation of radiomic features to clinical reality is to quantify and ascertain their robustness. In the present work, we studied the impact of segmentation and discretization on 88 radiomic features in 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and [11C]methyl-choline ([11C]choline) positron emission tomography/X-ray computed tomography (PET/CT) imaging of nasopharyngeal carcinoma. Forty patients underwent [18F]FDG PET/CT scans. Of these, nine patients were imaged on a different day utilizing [11C]choline PET/CT. Tumors were delineated using reference manual segmentation by the consensus of three expert physicians, using 41, 50, and 70 % maximum standardized uptake value (SUVmax) thresholds with background correction, Nestle's method, and watershed and region growing methods, and then discretized with fixed bin sizes (0.05, 0.1, 0.2, 0.5, and 1) in units of SUV. A total of 88 features, including 21 first-order intensity features, 10 shape features, and 57 second- and higher-order textural features, were extracted from the tumors. The robustness of the features was evaluated via the intraclass correlation coefficient (ICC) for seven segmentation methods (involving all 88 features) and five discretization bin sizes (involving the 57 second- and higher-order features). Forty-four (50 %) and 55 (63 %) features showed ICC ≥0.8 with respect to segmentation as obtained from [18F]FDG and [11C]choline, respectively. Thirteen (23 %) and 12 (21 %) features showed ICC ≥0.8 with respect to discretization as obtained from [18F]FDG and [11C]choline, respectively. Six features obtained from both [18F]FDG and [11C]choline had ICC ≥0.8 for both segmentation and discretization, five of which were gray-level co-occurrence matrix (GLCM) features (SumEntropy, Entropy, DifEntropy, Homogeneity1, and Homogeneity2) and one of which was a neighborhood gray-tone difference matrix (NGTDM) feature (Coarseness). Discretization generated larger effects on features than segmentation in both tracers. Features extracted from [11C]choline were more robust than [18F]FDG for segmentation. Discretization had very similar effects on features extracted from both tracers.
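
    The fixed-bin-size discretization studied here is straightforward to sketch. Below is a minimal illustration using a common convention for fixed-bin-width radiomics discretization (assign each voxel to bin floor((SUV - SUVmin) / width) + 1); the SUV values are invented for illustration, and details of the paper's exact binning convention may differ.

```python
import numpy as np

def discretize_suv(suv, bin_size=0.5):
    """Fixed-bin-size discretization: SUV -> integer gray level.

    Each voxel is assigned to bin floor((SUV - SUV_min) / bin_size) + 1,
    so smaller bins produce more gray levels.
    """
    suv = np.asarray(suv, dtype=float)
    return np.floor((suv - suv.min()) / bin_size).astype(int) + 1

tumor = np.array([2.3, 4.1, 7.8, 3.3, 12.6])  # illustrative voxel SUVs
for w in (0.1, 0.5, 1.0):
    levels = discretize_suv(tumor, w)
    # Smaller bins -> more gray levels -> larger texture matrices, which is
    # why the bin size strongly affects second- and higher-order features.
    print(w, levels.max())
```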

  14. Discrete tomography in an in vivo small animal bone study.

    PubMed

    Van de Casteele, Elke; Perilli, Egon; Van Aarle, Wim; Reynolds, Karen J; Sijbers, Jan

    2018-01-01

    This study aimed at assessing the feasibility of a discrete algebraic reconstruction technique (DART) to be used in in vivo small animal bone studies. The advantage of discrete tomography is the possibility to reduce the amount of X-ray projection images, which makes scans faster and implies also a significant reduction of radiation dose, without compromising the reconstruction results. Bone studies are ideal for being performed with discrete tomography, due to the relatively small number of attenuation coefficients contained in the image [namely three: background (air), soft tissue and bone]. In this paper, a validation is made by comparing trabecular bone morphometric parameters calculated from images obtained by using DART and the commonly used standard filtered back-projection (FBP). Female rats were divided into an ovariectomized (OVX) and a sham-operated group. In vivo micro-CT scanning of the tibia was done at baseline and at 2, 4, 8 and 12 weeks after surgery. The cross-section images were reconstructed using first the full set of projection images and afterwards reducing them in number to a quarter and one-sixth (248, 62, 42 projection images, respectively). For both reconstruction methods, similar changes in morphometric parameters were observed over time: bone loss for OVX and bone growth for sham-operated rats, although for DART the actual values were systematically higher (bone volume fraction) or lower (structure model index) compared to FBP, depending on the morphometric parameter. The DART algorithm was, however, more robust when using fewer projection images, where the standard FBP reconstruction was more prone to noise, showing a significantly bigger deviation from the morphometric parameters obtained using all projection images. 
This study supports the use of DART as a potential alternative method to FBP in X-ray micro-CT animal studies, in particular, when the number of projections has to be drastically minimized, which directly reduces scanning time and dose.

  15. Improved Discrete Approximation of Laplacian of Gaussian

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr.

    2004-01-01

    An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of information that must be processed in subsequent correlation processing (e.g., correlations between images in a stereoscopic pair for determining distances or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
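
    The baseline LOG-then-binarize filtering described above can be sketched directly (not the article's improved approximation): build a discrete LOG kernel, convolve, and keep only the sign bit. The kernel size, sigma, and test pattern below are illustrative choices.

```python
import numpy as np

def log_kernel(sigma=1.4, radius=4):
    """Discrete Laplacian-of-Gaussian kernel, adjusted to zero sum so a
    flat (constant-brightness) region produces no response."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x * x + y * y
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def filter_sign_of_log(image, sigma=1.4, radius=4):
    """Apply the LOG and keep only the sign, producing the 1-bit image
    used for cheap subsequent correlation (e.g., stereo matching)."""
    k = log_kernel(sigma, radius)
    padded = np.pad(image.astype(float), radius, mode="edge")
    out = np.empty(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.sum(window * k)  # symmetric kernel: corr == conv
    return (out > 0).astype(np.uint8)

# A vertical step edge: the sign of the LOG response flips across it,
# which is exactly the edge emphasis the Laplacian provides.
edge = np.zeros((16, 16), dtype=np.uint8)
edge[:, 8:] = 255
bits = filter_sign_of_log(edge)
```

Reducing each pixel to one bit is what shrinks the data volume fed to the correlation stage.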

  16. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least mean square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g., plain "for" or "while" loops for pixel access. Moreover, the proposed representation allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms on triangular meshes. The described method is proposed for image compression, pattern recognition, image quality improvement, image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).

  17. New method for identifying features of an image on a digital video display

    NASA Astrophysics Data System (ADS)

    Doyle, Michael D.

    1991-04-01

    The MetaMap process extends the concept of direct manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology is described, along with other possible applications. The MetaMap process is protected by U. S. patent #4

  18. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
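
    The efficiency claim (many sensitivities from one extra solve) can be illustrated on a toy discrete system, not the CFD equations themselves. Below, a small linear "state" problem A(p)u = b with objective J = gᵀu stands in for the discretized flow equations; the matrices are random placeholders. One adjoint solve Aᵀλ = g yields dJ/dp_k = -λᵀ(dA/dp_k)u for every parameter at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_params = 6, 4
A0 = np.eye(n) * 5 + rng.normal(size=(n, n)) * 0.1   # toy "flow" operator
dA = rng.normal(size=(n_params, n, n)) * 0.05        # dA/dp_k, fixed matrices
b = rng.normal(size=n)
g = rng.normal(size=n)

def A(p):
    return A0 + np.tensordot(p, dA, axes=1)

def J(p):
    """Objective evaluated through a forward solve."""
    return g @ np.linalg.solve(A(p), b)

p = np.zeros(n_params)
u = np.linalg.solve(A(p), b)            # one forward solve
lam = np.linalg.solve(A(p).T, g)        # ONE adjoint solve for ALL parameters
grad = np.array([-lam @ dA[k] @ u for k in range(n_params)])

# Verify against central finite differences (n_params extra solve pairs).
eps = 1e-6
fd = np.array([(J(p + eps * np.eye(n_params)[k])
                - J(p - eps * np.eye(n_params)[k])) / (2 * eps)
               for k in range(n_params)])
print(grad, fd)
```

The cost contrast is the point: finite differences need O(n_params) forward solves, while the adjoint needs a constant two solves regardless of the number of design variables.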

  19. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. 
To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.

  20. Self-adaptive difference method for the effective solution of computationally complex problems of boundary layer theory

    NASA Technical Reports Server (NTRS)

    Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.

    1986-01-01

    An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.

  1. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  2. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
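
    The role of the Chebyshev basis can be illustrated with NumPy's Chebyshev utilities on a synthetic decay curve. This is only a sketch of a noniterative, single-pass least-squares fit in the Chebyshev basis; the amplitude, rate, and offset below are invented, and the paper's transform handles the harder combined exponential-plus-harmonic case.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A decaying exponential sampled on the Chebyshev domain [-1, 1]
# (a stand-in for a fluorescence decay with photobleaching).
t = np.linspace(-1, 1, 200)
signal = 2.0 * np.exp(-1.5 * (t + 1)) + 0.3

# Noniterative, single-pass least-squares fit in the Chebyshev basis.
coef = C.chebfit(t, signal, deg=8)
recon = C.chebval(t, coef)
max_err = np.abs(recon - signal).max()
print(max_err)  # smooth decays are captured by a handful of coefficients
```

The rapid coefficient decay for smooth signals is what makes such single-pass fits fast enough for per-pixel lifetime analysis.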

  3. High-Order Methods for Incompressible Fluid Flow

    NASA Astrophysics Data System (ADS)

    Deville, M. O.; Fischer, P. F.; Mund, E. H.

    2002-08-01

    High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to the enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.

  4. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
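
    The DPCM-before-entropy-coding effect described above is easy to reproduce. In this sketch, a horizontal-difference DPCM transform is applied to a synthetic smooth image and both versions are compressed with zlib's DEFLATE (an LZ77-plus-Huffman scheme standing in for the Huffman/LZW coders discussed in the article); the image itself is invented for illustration.

```python
import numpy as np
import zlib

def dpcm_rows(image):
    """Horizontal DPCM: each pixel becomes its difference (mod 256) from
    the left neighbor; the first pixel of each row is kept as-is."""
    d = np.diff(image.astype(np.int16), axis=1, prepend=0)
    return (d % 256).astype(np.uint8)

rng = np.random.default_rng(2)
x = np.arange(256)
# Smooth synthetic "radiograph": gradients plus mild noise, clipped to 8 bits.
image = np.clip(40 + 0.5 * x[None, :] + 0.3 * x[:, None]
                + rng.normal(0, 2, (256, 256)), 0, 255).astype(np.uint8)

diff_img = dpcm_rows(image)
# The transform is exactly invertible (lossless): a running sum mod 256.
recon = (np.cumsum(diff_img.astype(np.int64), axis=1) % 256).astype(np.uint8)

plain = len(zlib.compress(image.tobytes(), 9))
after_dpcm = len(zlib.compress(diff_img.tobytes(), 9))
print(plain, after_dpcm)  # the differential image compresses much better
```

The differences cluster near zero, so the entropy coder spends fewer bits per pixel, which is exactly why DPCM pre-transformation helps every lossless coder listed above.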

  5. Modelling of high-frequency structure-borne sound transmission on FEM grids using the Discrete Flow Mapping technique

    NASA Astrophysics Data System (ADS)

    Hartmann, Timo; Tanner, Gregor; Xie, Gang; Chappell, David; Bajars, Janis

    2016-09-01

    Dynamical Energy Analysis (DEA) combined with the Discrete Flow Mapping technique (DFM) has recently been introduced as a mesh-based high frequency method modelling structure borne sound for complex built-up structures. This has proven to enhance vibro-acoustic simulations considerably by making it possible to work directly on existing finite element meshes circumventing time-consuming and costly re-modelling strategies. In addition, DFM provides detailed spatial information about the vibrational energy distribution within a complex structure in the mid-to-high frequency range. We will present here progress in the development of the DEA method towards handling complex FEM-meshes including Rigid Body Elements. In addition, structure borne transmission paths due to spot welds are considered. We will present applications for a car floor structure.

  6. Simultaneous storage of medical images in the spatial and frequency domain: A comparative study

    PubMed Central

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan

    2004-01-01

    Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial and frequency domains. The performance of interleaving in the spatial domain and in the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results The results show that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial- and DFT-domain interleaving gave much lower %NRMSE than DCT- and DWT-domain interleaving. Conclusion For spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensities. Among the frequency-domain interleaving methods, DFT was found to be the most efficient. PMID:15180899

  7. Design of discrete and continuous super-resolving Toraldo pupils in the microwave range.

    PubMed

    Olmi, Luca; Bolli, Pietro; Mugnai, Daniela

    2018-03-20

    The concept of super-resolution refers to various methods for improving the angular resolution of an optical imaging system beyond the classical diffraction limit. In optical microscopy, several techniques have been successfully developed with the aim of narrowing the central lobe of the illumination point spread function. In astronomy, however, no similar techniques can be used. A feasible method to design antennas and telescopes with angular resolution better than the diffraction limit consists of using variable transmittance pupils. In particular, discrete binary phase masks (0 or π) with finite phase-jump positions, known as Toraldo pupils (TPs), have the advantage of being easy to fabricate but offer relatively little flexibility in terms of achieving specific trade-offs between design parameters, such as the angular width of the main lobe and the intensity of sidelobes. In this paper, we show that a complex transmittance filter (equivalent to a continuous TP, i.e., consisting of infinitely narrow concentric rings) can more easily achieve the desired trade-off between design parameters. We also show how the super-resolution effect can be generated with both amplitude- and phase-only masks and confirm the expected performance with electromagnetic numerical simulations in the microwave range.

  8. Fast RBF OGr for solving PDEs on arbitrary surfaces

    NASA Astrophysics Data System (ADS)

    Piret, Cécile; Dunn, Jarrett

    2016-10-01

    The Radial Basis Functions Orthogonal Gradients method (RBF-OGr) was introduced in [1] to discretize differential operators on arbitrary manifolds defined only by a point cloud. We take advantage of the meshfree character of RBFs, which gives high accuracy and the flexibility to represent complex geometries in any spatial dimension. A major limitation of the RBF-OGr method was its high computational complexity, which greatly restricted the size of the point cloud. In this paper, we apply the RBF-Finite Difference (RBF-FD) technique to the RBF-OGr method for building sparse differentiation matrices discretizing continuous differential operators such as the Laplace-Beltrami operator. This method can be applied to solving PDEs on arbitrary surfaces embedded in ℝ³. We illustrate the accuracy of our new method by solving the heat equation on the unit sphere.

  9. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
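
    The hard-versus-soft assignment contrast can be sketched concisely. The snippet below compares a hard nearest-word histogram with a Gaussian-kernel soft assignment in the spirit of the kernel-codebook variants studied; the codebook, feature dimensionality, and kernel width are all illustrative assumptions.

```python
import numpy as np

def hard_histogram(features, codebook):
    """Traditional codebook model: each continuous feature votes for its
    single nearest visual word."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook))
    return hist / hist.sum()

def soft_histogram(features, codebook, sigma=0.5):
    """Soft assignment: each feature distributes unit mass over all words
    via a Gaussian kernel on the feature-to-word distance, so ambiguous
    features no longer commit to a single word."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    w = np.exp(-d ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # each feature contributes mass 1
    hist = w.sum(axis=0)
    return hist / hist.sum()

rng = np.random.default_rng(3)
codebook = rng.normal(size=(8, 2))      # 8 visual words in a 2-D feature space
features = rng.normal(size=(50, 2))     # continuous image features
h_hard = hard_histogram(features, codebook)
h_soft = soft_histogram(features, codebook)
print((h_hard == 0).sum(), (h_soft == 0).sum())
```

Because the soft histogram spreads mass over nearby words, it degrades gracefully as vocabularies grow, matching the paper's observation that soft assignment stays consistent where the hard model deteriorates.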

  10. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent adoption of these advances into the clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high-speeds and over extended depth ranges but with a lower acquisition bandwidth than that required using conventional approaches. Optically subsampled laser sources utilize a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth limited frequency window. By detecting the complex fringe signals and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight key principles behind optical-domain subsampled imaging, and demonstrate this principle experimentally using a polygon-filter based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343
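
    The aliasing mechanism can be illustrated numerically (this is a generic discrete-Fourier demonstration, not the paper's optical system; the comb size, spacing, and depth are invented).

```python
import numpy as np

# A complex fringe from depth z sampled on a comb of N wavenumbers with
# spacing dk oscillates at 2*z*dk/(2*pi) cycles per sample. Depths beyond
# the N-bin baseband window wrap (alias) back into it; with complex (I/Q)
# detection and a depth-constrained sample, the true depth is recoverable.
N, dk = 256, 1.0
n = np.arange(N)
z_true = np.pi * 300 / (N * dk)          # would land on "bin 300" unaliased
fringe = np.exp(1j * 2 * z_true * dk * n)
peak_bin = int(np.abs(np.fft.fft(fringe)).argmax())
print(peak_bin)  # 300 mod 256 = 44: the depth has folded into the window
```

The acquisition bandwidth only needs to cover the N-bin window, even though the physical depth range is larger, which is the data-efficiency argument of the abstract.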

  11. Comparative study of the discrete velocity and lattice Boltzmann methods for rarefied gas flows through irregular channels

    NASA Astrophysics Data System (ADS)

    Su, Wei; Lindsay, Scott; Liu, Haihu; Wu, Lei

    2017-08-01

Rooted in gas kinetics, the lattice Boltzmann method (LBM) is a powerful tool for modeling hydrodynamics. In the past decade, it has been extended to simulate rarefied gas flows beyond the Navier-Stokes level, either by using high-order Gauss-Hermite quadrature, or by introducing a relaxation time that is a function of the gas-wall distance. While the former method, with a limited number of discrete velocities (e.g., D2Q36), is accurate up to the early transition flow regime, the latter method (especially the multiple relaxation time (MRT) LBM), with the same discrete velocities as those used in simulating hydrodynamics (i.e., D2Q9), is accurate up to the free-molecular flow regime in the planar Poiseuille flow. This is quite astonishing in the sense that fewer discrete velocities are more accurate. In this paper, by solving the Bhatnagar-Gross-Krook kinetic equation accurately via the discrete velocity method, we find that the high-order Gauss-Hermite quadrature cannot describe the large variation in the velocity distribution function when the rarefaction effect is strong, but the MRT-LBM can capture the flow velocity well because it is equivalent to solving the Navier-Stokes equations with an effective shear viscosity. Since the MRT-LBM has only been validated in simple channel flows, and for complex geometries it is difficult to find the effective viscosity, it is necessary to assess its performance for the simulation of rarefied gas flows. Our numerical simulations based on the accurate discrete velocity method suggest that the accuracy of the MRT-LBM is reduced significantly in the simulation of rarefied gas flows through rough surfaces and porous media. Our simulation results could serve as benchmark cases for future development of the LBM for modeling and simulation of rarefied gas flows in complex geometries.
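For readers unfamiliar with the D2Q9 lattice mentioned above, the standard BGK equilibrium distribution is easy to write down and check (this is the textbook single-relaxation-time LBM ingredient, not the paper's MRT variant): the nine weighted populations reproduce the prescribed density and momentum exactly.

```python
# D2Q9 lattice: 9 discrete velocities and their standard weights.
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4

def feq(rho, u):
    """Second-order BGK equilibrium distribution (lattice units, cs^2 = 1/3)."""
    ux, uy = u
    usq = ux * ux + uy * uy
    out = []
    for (cx, cy), w in zip(C, W):
        cu = cx * ux + cy * uy
        out.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return out

f = feq(1.0, (0.05, -0.02))
density = sum(f)
momentum = (sum(fi * cx for fi, (cx, cy) in zip(f, C)),
            sum(fi * cy for fi, (cx, cy) in zip(f, C)))
print(density, momentum)  # moments match rho = 1.0 and u = (0.05, -0.02)
```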

  12. Comparative study of the discrete velocity and lattice Boltzmann methods for rarefied gas flows through irregular channels.

    PubMed

    Su, Wei; Lindsay, Scott; Liu, Haihu; Wu, Lei

    2017-08-01

Rooted in gas kinetics, the lattice Boltzmann method (LBM) is a powerful tool for modeling hydrodynamics. In the past decade, it has been extended to simulate rarefied gas flows beyond the Navier-Stokes level, either by using high-order Gauss-Hermite quadrature, or by introducing a relaxation time that is a function of the gas-wall distance. While the former method, with a limited number of discrete velocities (e.g., D2Q36), is accurate up to the early transition flow regime, the latter method (especially the multiple relaxation time (MRT) LBM), with the same discrete velocities as those used in simulating hydrodynamics (i.e., D2Q9), is accurate up to the free-molecular flow regime in the planar Poiseuille flow. This is quite astonishing in the sense that fewer discrete velocities are more accurate. In this paper, by solving the Bhatnagar-Gross-Krook kinetic equation accurately via the discrete velocity method, we find that the high-order Gauss-Hermite quadrature cannot describe the large variation in the velocity distribution function when the rarefaction effect is strong, but the MRT-LBM can capture the flow velocity well because it is equivalent to solving the Navier-Stokes equations with an effective shear viscosity. Since the MRT-LBM has only been validated in simple channel flows, and for complex geometries it is difficult to find the effective viscosity, it is necessary to assess its performance for the simulation of rarefied gas flows. Our numerical simulations based on the accurate discrete velocity method suggest that the accuracy of the MRT-LBM is reduced significantly in the simulation of rarefied gas flows through rough surfaces and porous media. Our simulation results could serve as benchmark cases for future development of the LBM for modeling and simulation of rarefied gas flows in complex geometries.

  13. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity into three subjective levels: low, medium, and high. Image features are then extracted, and finally a function is established between the complexity value and the color-feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the computed values agree well with human visual perception of complexity; color-based image complexity therefore has a certain reference value.
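The abstract does not give its color-feature model, so as a hedged illustration of the general idea only, one common proxy for color-based complexity is the Shannon entropy of the color distribution: a single-color image scores zero, and more varied color content scores higher.

```python
import math
from collections import Counter

def color_entropy(pixels):
    """Shannon entropy (bits) of the color distribution: a simple proxy
    for color-based image complexity (a uniform image scores 0)."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

flat = [(128, 128, 128)] * 16                             # one color: minimal complexity
varied = [(i * 16, 0, 255 - i * 16) for i in range(16)]   # 16 distinct colors
print(color_entropy(flat))    # 0.0 bits
print(color_entropy(varied))  # 4.0 bits (log2 of 16 equiprobable colors)
```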

  14. Detection of fuze defects by image-processing methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, M.J.

    1988-03-01

    This paper describes experimental studies of the detection of mechanical defects by the application of computer-processing methods to real-time radiographic images of fuze assemblies. The experimental results confirm that a new algorithm developed at Materials Research Laboratory has potential for the automatic inspection of these assemblies and of others that contain discrete components. The algorithm was applied to images that contain a range of grey levels and has been found to be tolerant to image variations encountered under simulated production conditions.

  15. Scaled nonuniform Fourier transform for image reconstruction in swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-02-01

Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT) that requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
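The NDFT idea the record builds on can be demonstrated directly (an illustrative sketch, not the paper's scaled transform; the wavenumber range, reflector depth, and grid are all assumed): evaluate the Fourier sum at each candidate depth using the actual nonuniform k-samples, with no interpolation onto a uniform grid, and the reflector still shows up at the right depth.

```python
import cmath
import random

random.seed(0)
z0 = 3.0  # true reflector depth (arbitrary units)
# Nonuniformly spaced wavenumber samples, as produced by a nonlinear sweep:
ks = sorted(random.uniform(10.0, 20.0) for _ in range(200))
fringe = [cmath.exp(1j * k * z0) for k in ks]

def ndft_depth_profile(signal, ks, depths):
    """Direct O(N*M) nonuniform DFT: evaluate the transform of the fringe
    at each candidate depth using the true (non-equidistant) k values."""
    return [abs(sum(s * cmath.exp(-1j * k * z) for s, k in zip(signal, ks)))
            for z in depths]

depths = [i * 0.1 for i in range(61)]  # candidate depths 0.0 .. 6.0
profile = ndft_depth_profile(fringe, ks, depths)
peak_depth = depths[profile.index(max(profile))]
print(peak_depth)  # peak lands at the true depth, ~3.0
```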

  16. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi₂ nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.

  17. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some fantastic features of its own. It has a greater degree of randomness because of the randomization in terms of both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has an important feature that the magnitude and phase of its output are both random. As an important application of the RDLCT, it can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.
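The construction principle described above, randomizing both eigenvectors and eigenvalues of a transform kernel, can be sketched generically (this is my own minimal illustration, not the authors' exact DLCT kernel): orthonormalize random complex vectors to get eigenvectors, pick random unit-modulus eigenvalues, and the resulting kernel is unitary, so the transform is exactly invertible and energy-preserving, which is what makes lossless decryption possible.

```python
import cmath
import random

random.seed(1)
N = 4

def gram_schmidt(vectors):
    """Orthonormalize complex vectors to serve as random eigenvectors."""
    basis = []
    for v in vectors:
        for b in basis:
            proj = sum(x * y.conjugate() for x, y in zip(v, b))
            v = [x - proj * y for x, y in zip(v, b)]
        norm = sum(abs(x) ** 2 for x in v) ** 0.5
        basis.append([x / norm for x in v])
    return basis

V = gram_schmidt([[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
                  for _ in range(N)])
lam = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(N)]

# K = sum_k lam_k * v_k v_k^H : unitary because |lam_k| = 1 and {v_k} orthonormal.
K = [[sum(lam[k] * V[k][m] * V[k][n].conjugate() for k in range(N))
      for n in range(N)] for m in range(N)]

# A unitary kernel preserves signal energy, so the transform is invertible:
x = [complex(1, 0), complex(0, 2), complex(-1, 1), complex(0.5, 0)]
y = [sum(K[m][n] * x[n] for n in range(N)) for m in range(N)]
print(sum(abs(v) ** 2 for v in x), sum(abs(v) ** 2 for v in y))
```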

  18. Development and Application of Agglomerated Multigrid Methods for Complex Geometries

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2010-01-01

We report progress in the development of agglomerated multigrid techniques for fully unstructured grids in three dimensions, building upon two previous studies focused on efficiently solving a model diffusion equation. We demonstrate a robust fully-coarsened agglomerated multigrid technique for 3D complex geometries, incorporating the following key developments: consistent and stable coarse-grid discretizations, a hierarchical agglomeration scheme, and line-agglomeration/relaxation using prismatic-cell discretizations in the highly-stretched grid regions. A significant speed-up in computer time is demonstrated for a model diffusion problem, the Euler equations, and the Reynolds-averaged Navier-Stokes equations for 3D realistic complex geometries.

  19. Fourth-order partial differential equation noise removal on welding images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan

    2015-10-22

Partial differential equation (PDE) methods have become one of the important topics in mathematics and are widely used in various fields. They can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using a finite difference approach and then applied to a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using the Peak Signal to Noise Ratio (PSNR). Simulation is carried out with the discretized model on different levels of Gaussian noise in order to get the maximum PSNR value. The convergence criterion chosen to determine the number of iterations required is the highest PSNR value. Results obtained show that the fourth-order PDE model produces promising results as an image denoising tool compared with the median filter.
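The two computational ingredients named above, an explicit finite-difference step of a fourth-order flow and the PSNR used to score it, are simple to sketch. This is a hedged 1D stand-in (the linear equation u_t = -u_xxxx, the step size, and the test signal are my assumptions, not the paper's model):

```python
import math

def psnr(clean, noisy, peak=255.0):
    """Peak Signal to Noise Ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, noisy)) / len(clean)
    return 10 * math.log10(peak ** 2 / mse)

def fourth_order_step(u, dt=0.05):
    """One explicit finite-difference step of u_t = -u_xxxx in 1D
    (boundary values replicated); high-frequency noise is damped fastest."""
    pad = [u[0], u[0]] + list(u) + [u[-1], u[-1]]
    return [u[i] - dt * (pad[i] - 4 * pad[i + 1] + 6 * pad[i + 2]
                         - 4 * pad[i + 3] + pad[i + 4])
            for i in range(len(u))]

clean = [100.0] * 32
noisy = [100.0 + (5 if i % 2 else -5) for i in range(32)]
print(psnr(clean, noisy))                      # about 34.15 dB
print(psnr(clean, fourth_order_step(noisy)))   # higher: the noise is damped
```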

  20. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, light scattering properties of particles with absorption differ from those without. Simple shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex shaped particles has been reported. In this paper, the surface integral equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence computational resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  1. Imaging quality evaluation method of pixel coupled electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui

    2017-09-01

With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible," with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image-quality evaluation of the coupled discrete sampling imaging system. Based on the transfer process of a grayscale cosine-distribution optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies on the convergence and periodically oscillating characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.

  2. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
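The exponential-window inverse transform compared above can be sketched with the classic Fourier-series (Dubner-Abate-type) formula: sample the Laplace transform on a contour shifted into the right half-plane by sigma, and sum a DFT-like series. This is a generic textbook sketch under my own parameter choices (sigma, T, term count), not the paper's BEM formulation:

```python
import math
import cmath

def inv_laplace(F, t, sigma=1.0, T=8.0, nterms=20000):
    """Approximate f(t) from its Laplace transform F(s) by sampling on the
    shifted contour s = sigma + i*k*pi/T (exponential-window method).
    Valid for 0 < t < 2T; larger sigma suppresses aliasing from t + 2T."""
    acc = 0.5 * F(complex(sigma, 0.0)).real
    for k in range(1, nterms + 1):
        w = k * math.pi / T
        acc += (F(complex(sigma, w)) * cmath.exp(1j * w * t)).real
    return math.exp(sigma * t) / T * acc

# Known transform pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
approx = inv_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
print(approx, math.exp(-1.0))  # the two values agree closely
```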

  3. Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschamps, T; Schwartz, P; Trebotich, D

In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Sets methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood-flow inside the extracted surface without losing any complicated details and without building additional grids.

  4. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment*†

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327
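The serial core that the paper parallelizes is an ordinary discrete event simulation loop: a priority queue of timestamped events, popped in time order, where processing one event may schedule new ones. A minimal generic sketch (the event names and the "collision schedules a recoil" rule are toy assumptions, not the DMD force model):

```python
import heapq

def run(events, horizon):
    """Minimal discrete-event loop: pop the earliest event, process it, and
    possibly schedule follow-up events; the heap keeps global time order."""
    queue = list(events)
    heapq.heapify(queue)
    processed = []
    while queue:
        t, name = heapq.heappop(queue)
        if t > horizon:
            break
        processed.append((t, name))
        if name == "collision":  # toy rule: a collision schedules a follow-up event
            heapq.heappush(queue, (t + 1.5, "recoil"))
    return processed

log = run([(2.0, "collision"), (0.5, "wall-hit"), (1.0, "collision")], horizon=10.0)
print(log)
```

The serialization problem the paper attacks is visible here: each pop can change the future event set, so events cannot naively be processed in parallel; speculation plus in-order commitment recovers parallelism while preserving exactly this time order.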

  5. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.

  6. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis-that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
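A tiny illustration of "information encoded in the functions" (my own minimal example, not the paper's full measures): with uniform binary inputs, even the output entropy alone already separates function classes such as AND and XOR among discrete functions of three variables (two inputs and one output).

```python
import math
from collections import Counter
from itertools import product

def entropy(counts):
    """Shannon entropy (bits) of a discrete distribution given by counts."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def output_entropy(f):
    """H(Y) for Y = f(X1, X2) with uniform binary inputs."""
    ys = Counter(f(a, b) for a, b in product((0, 1), repeat=2))
    return entropy(ys)

print(output_entropy(lambda a, b: a & b))  # AND: P(Y=1) = 1/4, about 0.811 bits
print(output_entropy(lambda a, b: a ^ b))  # XOR: P(Y=1) = 1/2, exactly 1 bit
```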

  7. An efficient and secure partial image encryption for wireless multimedia sensor networks using discrete wavelet transform, chaotic maps and substitution box

    NASA Astrophysics Data System (ADS)

    Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.

    2017-03-01

Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids, and industrial control systems. In recent years, computer scientists have focused on finding further applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, WSNs process a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. Given battery life constraints, image compression combined with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmitted data must be secured through a process known as encryption. As a result, there has long been an intensive demand for schemes that are both energy efficient and highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and then the image is shuffled column-wise and row-wise via the Piece-wise Linear Chaotic Map (PWLCM) and the Nonlinear Chaotic Algorithm, respectively. For higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from the Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained by substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the anticipated scheme.
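One ingredient of schemes like the one above, a PWLCM-driven XOR keystream, is easy to sketch in isolation (a hedged toy: the parameters, quantization to key bytes, and XOR layer are my assumptions; the full scheme also involves the DWT, hashing, permutation, and the S-box):

```python
def pwlcm(x, p):
    """Piece-wise Linear Chaotic Map on [0, 1], control parameter 0 < p < 0.5."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # symmetric upper half

def keystream(n, x0=0.371, p=0.27):
    """Iterate the map and quantize each state to a key byte."""
    out, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(data, x0=0.371, p=0.27):
    """XOR with the chaotic keystream; applying it twice restores the data."""
    return bytes(b ^ k for b, k in zip(data, keystream(len(data), x0, p)))

plain = b"discrete wavelet coefficients"
cipher = xor_cipher(plain)
print(xor_cipher(cipher) == plain)  # True: exact round trip
```

Because the keystream is fully determined by (x0, p), those values act as the key: the same parameters decrypt, while even slightly different ones do not, which is the key-sensitivity property chaotic-map schemes rely on.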

  8. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines

    PubMed Central

    Press, William H.

    2006-01-01

Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155
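What the DRT computes can be conveyed with a naive O(N³) reference version (my own sketch: the fast transform achieves O(N² log N) with a recursive scheme not shown here, and this toy covers only the slope range 0..1 with simple rounding):

```python
def drt_slopes(img):
    """Naive sums along digital lines y(x) = b + round(s*x/(n-1)), for
    slopes s/(n-1) in [0, 1] and integer intercepts b (one quadrant).
    Lines leaving the image are skipped. Addition only, as in the DRT."""
    n = len(img)
    out = {}
    for s in range(n):
        for b in range(n):
            total, valid = 0, True
            for x in range(n):
                y = b + round(s * x / (n - 1))
                if y >= n:
                    valid = False
                    break
                total += img[y][x]
            if valid:
                out[(s, b)] = total
    return out

const = [[2] * 4 for _ in range(4)]
sums = drt_slopes(const)
print(all(v == 8 for v in sums.values()))  # every 4-pixel line of a constant-2 image sums to 8
```

Replacing `total += img[y][x]` with collecting values and taking their median gives the paper's generalized (median-along-lines) transform in the same naive form.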

  9. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines.

    PubMed

    Press, William H

    2006-12-19

Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.

  10. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.

  11. An extended diffraction tomography method for quantifying structural damage using numerical Green's functions.

    PubMed

    Chan, Eugene; Rose, L R Francis; Wang, Chun H

    2015-05-01

Existing damage imaging algorithms for detecting and quantifying structural defects, particularly those based on diffraction tomography, assume far-field conditions for the scattered field data. This paper presents a major extension of diffraction tomography that can overcome this limitation and utilises a near-field multi-static data matrix as the input data. This new algorithm, which employs numerical solutions of the dynamic Green's functions, makes it possible to quantitatively image laminar damage even in complex structures for which the dynamic Green's functions are not available analytically. To validate this new method, the numerical Green's functions and the multi-static data matrix for laminar damage in flat and stiffened isotropic plates are first determined using finite element models. Next, these results are time-gated to remove boundary reflections, followed by discrete Fourier transform to obtain the amplitude and phase information for both the baseline (damage-free) and the scattered wave fields. Using these computationally generated results and experimental verification, it is shown that the new imaging algorithm is capable of accurately determining the damage geometry, size and severity for a variety of damage sizes and shapes, including multi-site damage. Some aspects of minimal sensors requirement pertinent to image quality and practical implementation are also briefly discussed.

  12. Imaging of Posttraumatic Arthritis, Avascular Necrosis, Septic Arthritis, Complex Regional Pain Syndrome, and Cancer Mimicking Arthritis.

    PubMed

    Rupasov, Andrey; Cain, Usa; Montoya, Simone; Blickman, Johan G

    2017-09-01

This article focuses on the imaging of 5 discrete entities with a common end result of disability: posttraumatic arthritis, a common form of secondary osteoarthritis that results from a prior insult to the joint; avascular necrosis, a disease of impaired osseous blood flow, leading to cellular death and subsequent osseous collapse; septic arthritis, an infectious process leading to destructive changes within the joint; complex regional pain syndrome, a chronic limb-confined painful condition arising after injury; and cases of cancer mimicking arthritis, in which the initial findings seem to represent arthritis, despite a more insidious cause.

  13. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.
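The trade-off the record discusses, implementation simplicity versus efficiency of implicit time integration, can be illustrated on a generic stiff test problem (this is a textbook sketch, not the Bürger-Diehl settling-compression discretization: the ODE, step size, and stiffness constant are my assumptions). A linearly implicit Euler step only requires solving a linear system (here scalar), yet it stays stable where explicit Euler blows up:

```python
def explicit_euler(lmbda, dt, steps, x0=1.0):
    """x_{n+1} = (1 + dt*lambda) * x_n : unstable when |1 + dt*lambda| > 1."""
    x = x0
    for _ in range(steps):
        x += dt * lmbda * x
    return x

def linearly_implicit_euler(lmbda, dt, steps, x0=1.0):
    """Solve (1 - dt*lambda) * x_{n+1} = x_n : stable for any dt when lambda < 0."""
    x = x0
    for _ in range(steps):
        x = x / (1 - dt * lmbda)
    return x

# Stiff decay dx/dt = -50 x with a step size far above the explicit stability limit:
print(explicit_euler(-50.0, 0.1, 20))          # diverges: growth factor |1 - 5| = 4
print(linearly_implicit_euler(-50.0, 0.1, 20)) # decays toward 0, as the true solution does
```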

  14. Comparison of Inoculation with the InoqulA and WASP Automated Systems with Manual Inoculation

    PubMed Central

    Croxatto, Antony; Dijkstra, Klaas; Prod'hom, Guy

    2015-01-01

    The quality of sample inoculation is critical for achieving an optimal yield of discrete colonies in both monomicrobial and polymicrobial samples to perform identification and antibiotic susceptibility testing. Consequently, we compared the performance between the InoqulA (BD Kiestra), the WASP (Copan), and manual inoculation methods. Defined mono- and polymicrobial samples of 4 bacterial species and cloudy urine specimens were inoculated on chromogenic agar by the InoqulA, the WASP, and manual methods. Images taken with ImagA (BD Kiestra) were analyzed with the VisionLab version 3.43 image analysis software to assess the quality of growth and to prevent subjective interpretation of the data. A 3- to 10-fold higher yield of discrete colonies was observed following automated inoculation with both the InoqulA and WASP systems than that with manual inoculation. The difference in performance between automated and manual inoculation was mainly observed at concentrations of >10^6 bacteria/ml. Inoculation with the InoqulA system allowed us to obtain significantly more discrete colonies than the WASP system at concentrations of >10^7 bacteria/ml. However, the level of difference observed was bacterial species dependent. Discrete colonies of bacteria present in 100- to 1,000-fold lower concentrations than the most concentrated populations in defined polymicrobial samples were not reproducibly recovered, even with the automated systems. The analysis of cloudy urine specimens showed that InoqulA inoculation provided a statistically significantly higher number of discrete colonies than that with WASP and manual inoculation. Consequently, the automated InoqulA inoculation greatly decreased the requirement for bacterial subculture and thus resulted in a significant reduction in the time to results, laboratory workload, and laboratory costs. PMID:25972424

  15. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging

    PubMed Central

    Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.

    2017-01-01

    Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
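
    The ℓ1-regularized least-squares problem at the heart of such methods can be solved with iterative shrinkage-thresholding (ISTA), one of the classic algorithm families in sparse reconstruction. A minimal sketch on a synthetic system (the matrix sizes, regularization weight, and sparse target below are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def ista(A, y, lam=0.1, n_iter=200):
        """Iterative shrinkage-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
            z = x - g / L                      # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # Recover a sparse reflectivity vector from noisy linear measurements
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 40))
    x_true = np.zeros(40)
    x_true[[3, 17, 30]] = [1.0, -0.5, 0.8]
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = ista(A, y, lam=0.5)
    ```

    The soft-threshold step is what enforces sparsity; increasing `lam` yields sparser solutions, which is the "solution sparsity may be adjusted as desired" knob mentioned in the abstract.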

  16. Influence of muscle-tendon complex geometrical parameters on modeling passive stretch behavior with the Discrete Element Method.

    PubMed

    Roux, A; Laporte, S; Lecompte, J; Gras, L-L; Iordanoff, I

    2016-01-25

    The muscle-tendon complex (MTC) is a multi-scale, anisotropic, non-homogeneous structure. It is composed of fascicles, gathered together in a conjunctive aponeurosis. Fibers are oriented into the MTC with a pennation angle. Many MTC models use the Finite Element Method (FEM) to simulate the behavior of the MTC as a hyper-viscoelastic material. The Discrete Element Method (DEM) could be adapted to model fibrous materials, such as the MTC. DEM could capture the complex behavior of a material with a simple discretization scheme and help in understanding the influence of the orientation of fibers on the MTC's behavior. The aims of this study were to model the MTC in DEM at the macroscopic scale and to obtain the force/displacement curve during a non-destructive passive tensile test. Another aim was to highlight the influence of the geometrical parameters of the MTC on the global mechanical behavior. A geometrical construction of the MTC was done using discrete elements linked by springs. Young's modulus values of the MTC's components were retrieved from the literature to model the microscopic stiffness of each spring. Alignment and re-orientation of all of the muscle's fibers with the tensile axis were observed numerically. The hyper-elastic behavior of the MTC was pointed out. The structural effects, combined with the geometrical parameters, shape the MTC's mechanical behavior, which is also highlighted by the heterogeneity of strain across the MTC's components. DEM seems to be a promising method to model the hyper-elastic macroscopic behavior of the MTC with simple elastic microscopic elements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Approaches to quantitating the results of differentially dyed cottons

    USDA-ARS?s Scientific Manuscript database

    The differential dyeing (DD) method has served as a subjective method for visually determining immature cotton fibers. In an attempt to quantitate the results of the differential dyeing method, and thus offer an efficient means of elucidating cotton maturity without visual discretion, image analysi...

  18. AN IMMERSED BOUNDARY METHOD FOR COMPLEX INCOMPRESSIBLE FLOWS

    EPA Science Inventory

    An immersed boundary method for time-dependent, three- dimensional, incompressible flows is presented in this paper. The incompressible Navier-Stokes equations are discretized using a low-diffusion flux splitting method for the inviscid fluxes and a second order central differenc...

  19. Three-Class Mammogram Classification Based on Descriptive CNN Features

    PubMed Central

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of a two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods are compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of the proposed model as compared to other well-known existing techniques. PMID:28191461
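
    The single-level 2D-DWT step of the CNN-DW pipeline splits an image into four subbands (LL, LH, HL, HH). A minimal sketch with the Haar wavelet (the abstract does not specify the wavelet family; Haar is chosen here for brevity, implemented with plain averages and differences):

    ```python
    import numpy as np

    def haar_dwt2(img):
        """One level of a 2-D Haar wavelet transform: returns (LL, LH, HL, HH)."""
        img = img.astype(float)
        # Rows: average / difference of adjacent pixel pairs
        lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
        hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
        # Columns: repeat the averaging / differencing on both row outputs
        LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # coarse approximation
        LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
        HL = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
        HH = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
        return LL, LH, HL, HH

    patch = np.arange(64, dtype=float).reshape(8, 8)   # toy mammogram patch
    LL, LH, HL, HH = haar_dwt2(patch)                  # each subband is 4x4
    ```

    In the pipeline described above, features (DSIFT in the paper) would then be computed per subband and stacked into the input matrix for the CNN.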

  20. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of a two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods are compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of the proposed model as compared to other well-known existing techniques.

  1. Sub-pixel flood inundation mapping from multispectral remotely sensed images based on discrete particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang

    2015-03-01

    The study of flood inundation is significant to human life and social economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolutions are widely used in mapping inundation. However, mixed pixels do exist due to their relatively low spatial resolutions. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve an improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated. The DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function designing and swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM+ images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed the four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performances. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support the ecological and environmental studies of river basins.
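
    The discrete particle encoding and swarm search strategy can be illustrated with a generic binary PSO in the Kennedy-Eberhart style, where real-valued velocities are mapped through a sigmoid to bit-flip probabilities. The toy fitness function, dimensions, and coefficients below are illustrative stand-ins, not the DPSO-SFIM formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    target = rng.integers(0, 2, 24)        # hypothetical "true" sub-pixel labels

    def fitness(x):
        """To minimize: number of mismatched sub-pixel labels."""
        return int(np.sum(x != target))

    n_particles, dim = 20, 24
    X = rng.integers(0, 2, (n_particles, dim))    # discrete particle positions
    V = np.zeros((n_particles, dim))              # real-valued velocities
    P, p_fit = X.copy(), np.array([fitness(x) for x in X])  # personal bests
    g = P[np.argmin(p_fit)].copy()                          # global best

    for _ in range(100):
        r1, r2 = rng.random((2, n_particles, dim))
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        # Sample each bit with probability sigmoid(velocity)
        X = (rng.random((n_particles, dim)) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        better = f < p_fit
        P[better], p_fit[better] = X[better], f[better]
        g = P[np.argmin(p_fit)].copy()
    ```

    In the actual DPSO-SFIM setting, the fitness would encode the sub-pixel spatial-dependence criterion and particles would respect the per-pixel class fractions derived from the coarse image.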

  2. Contribution of 3D inversion of Electrical Resistivity Tomography data applied to volcanic structures

    NASA Astrophysics Data System (ADS)

    Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe

    2016-04-01

    The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used for imaging geological structures. Such structures can present complex characteristics that conventional 2D inversion processes cannot fully capture. Here we present a new 3D inversion algorithm named EResI, first developed for levee investigation and presently applied to the study of a complex lava dome (the Puy de Dôme volcano, France). The EResI algorithm is based on a conventional regularized Gauss-Newton inversion scheme and a 3D unstructured discretization of the model (a double-grid method based on tetrahedrons). This discretization makes it possible to accurately model the topography of the investigated structure (without a mesh deformation procedure) and permits precise location of the electrodes. Moreover, we demonstrate that a fully 3D unstructured discretization limits the number of inversion cells and is better adapted to the resolution capacity of tomography than a structured discretization. This study shows that a 3D inversion with an unstructured parametrization has several advantages over classical 2D inversions. The first is that a 2D inversion leads to artefacts due to 3D effects (3D topography, 3D internal resistivity). The second is that the ability to align electrodes along an axis in the field (for 2D surveys) depends on field constraints (topography...); the 2D assumption made by 2.5D inversion software prevents it from modelling electrodes located off this axis, leading to artefacts in the inversion result. The last limitation comes from the mesh deformation techniques used to model the topography accurately in 2D software; this technique, used for structured discretizations (Res2dinv), is prohibited for strong topography (>60 %) and leads to small computational errors.
    A wide geophysical survey was carried out on the Puy de Dôme volcano, resulting in 12 ERT profiles with approximately 800 electrodes. We performed two processing stages by inverting each profile independently in 2D (RES2DINV software) and the complete data set in 3D (EResI). The comparison of the 3D inversion results with those obtained through a conventional 2D inversion process showed that EResI accurately accounts for arbitrary electrode positioning and reduces artefacts in the inversion models caused by positioning errors off the profile axis. This comparison also highlighted the advantages of integrating several ERT lines to compute 3D models of complex volcanic structures. Finally, the resulting 3D model allows a better interpretation of the Puy de Dôme volcano.

  3. Monitoring in real-time focal adhesion protein dynamics in response to a discrete mechanical stimulus

    NASA Astrophysics Data System (ADS)

    von Bilderling, Catalina; Caldarola, Martín; Masip, Martín E.; Bragas, Andrea V.; Pietrasanta, Lía I.

    2017-01-01

    The adhesion of cells to the extracellular matrix is a hierarchical, force-dependent, multistage process that evolves at several temporal scales. An understanding of this complex process requires a precise measurement of forces and its correlation with protein responses in living cells. We present a method to quantitatively assess live cell responses to a local and specific mechanical stimulus. Our approach combines atomic force microscopy with fluorescence imaging. Using this approach, we evaluated the recruitment of adhesion proteins such as vinculin, focal adhesion kinase, paxillin, and zyxin triggered by applying forces in the nN regime to live cells. We observed in real time the development of nascent adhesion sites, evident from the accumulation of early adhesion proteins at the position where the force was applied. We show that the method can be used to quantify the recruitment characteristic times for adhesion proteins in the formation of focal complexes. We also found a spatial remodeling of the mature focal adhesion protein zyxin as a function of the applied force. Our approach allows the study of a variety of complex biological processes involved in cellular mechanotransduction.

  4. Monitoring in real-time focal adhesion protein dynamics in response to a discrete mechanical stimulus.

    PubMed

    von Bilderling, Catalina; Caldarola, Martín; Masip, Martín E; Bragas, Andrea V; Pietrasanta, Lía I

    2017-01-01

    The adhesion of cells to the extracellular matrix is a hierarchical, force-dependent, multistage process that evolves at several temporal scales. An understanding of this complex process requires a precise measurement of forces and its correlation with protein responses in living cells. We present a method to quantitatively assess live cell responses to a local and specific mechanical stimulus. Our approach combines atomic force microscopy with fluorescence imaging. Using this approach, we evaluated the recruitment of adhesion proteins such as vinculin, focal adhesion kinase, paxillin, and zyxin triggered by applying forces in the nN regime to live cells. We observed in real time the development of nascent adhesion sites, evident from the accumulation of early adhesion proteins at the position where the force was applied. We show that the method can be used to quantify the recruitment characteristic times for adhesion proteins in the formation of focal complexes. We also found a spatial remodeling of the mature focal adhesion protein zyxin as a function of the applied force. Our approach allows the study of a variety of complex biological processes involved in cellular mechanotransduction.

  5. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortes, Adriano M.; Dalcin, Lisandro; Sarmiento, Adel F.

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf–sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric and indefinite saddle-point linear system. The iterative method of choice to solve such a system is the Generalized Minimal Residual Method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of “black-box” solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Lastly, besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus of eccentric annular cross-section.

  6. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    DOE PAGES

    Cortes, Adriano M.; Dalcin, Lisandro; Sarmiento, Adel F.; ...

    2016-10-19

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf–sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric and indefinite saddle-point linear system. The iterative method of choice to solve such a system is the Generalized Minimal Residual Method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of “black-box” solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Lastly, besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus of eccentric annular cross-section.

  7. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  8. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    Here, stencils are commonly used to implement efficient on–the–fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf–sup–stable Stokes discretizations and mixed finite element discretizations.

  9. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE PAGES

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    2017-02-24

    Here, stencils are commonly used to implement efficient on–the–fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf–sup–stable Stokes discretizations and mixed finite element discretizations.

  10. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on top of which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method adaptively extracts the most robust feature regions from the blocks segmented by SLIC. Each feature region is decomposed into a low-frequency and a high-frequency domain by the Discrete Cosine Transform (DCT), and watermark images are then embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
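
    The quantization-based embedding can be sketched for a single scalar DCT coefficient using plain dither modulation: each bit value selects one of two interleaved quantization lattices, and decoding picks the nearer lattice. This sketch omits the distortion-compensation term that DC-DM adds, and the step size is an arbitrary illustrative choice:

    ```python
    import numpy as np

    def dm_embed(coeff, bit, step=8.0, dither=0.0):
        """Quantize a DCT coefficient onto the lattice selected by `bit`."""
        d = dither + bit * step / 2.0          # lattices offset by step/2
        return np.round((coeff - d) / step) * step + d

    def dm_extract(coeff, step=8.0, dither=0.0):
        """Decode the bit as the nearer of the two quantization lattices."""
        errs = [abs(coeff - dm_embed(coeff, b, step, dither)) for b in (0, 1)]
        return int(np.argmin(errs))

    c = 37.3                        # a low-frequency DCT coefficient
    marked = dm_embed(c, 1)         # embed bit 1 (moves c to the nearest "1" lattice point)
    assert dm_extract(marked) == 1
    ```

    Decoding stays correct as long as attack-induced perturbations of the coefficient remain below step/4, which is why embedding targets the relatively stable low-frequency coefficients.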

  11. General method to find the attractors of discrete dynamic models of biological systems.

    PubMed

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.
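
    The virtual-node construction can be sketched directly: a node with k discrete levels becomes k-1 Boolean virtual nodes v_j = (state >= j), and the multilevel state is recovered as the highest threshold still satisfied. A minimal encoding/decoding pair (an illustration of the idea, not the authors' implementation):

    ```python
    def to_virtual(levels, value):
        """Encode a multilevel state as Boolean virtual nodes v_k = (value >= k)."""
        return {k: value >= k for k in range(1, levels)}

    def from_virtual(virtual):
        """Decode: the state is the highest threshold whose virtual node is on."""
        return max([k for k, on in virtual.items() if on], default=0)

    # A 3-level node (states 0, 1, 2) becomes two virtual Boolean nodes
    v = to_virtual(3, 2)            # {1: True, 2: True}
    assert from_virtual(v) == 2
    ```

    Note the encoding is monotone by construction (v_2 on implies v_1 on), which is the consistency the quasi-Boolean update functions must preserve in the expanded network.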

  12. General method to find the attractors of discrete dynamic models of biological systems

    NASA Astrophysics Data System (ADS)

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.

  13. Discrete choice experiments of pharmacy services: a systematic review.

    PubMed

    Vass, Caroline; Gray, Ewan; Payne, Katherine

    2016-06-01

    Background Two previous systematic reviews have summarised the application of discrete choice experiments to value preferences for pharmacy services. These reviews identified a total of twelve studies and described how discrete choice experiments have been used to value pharmacy services but did not describe or discuss the application of methods used in the design or analysis. Aims (1) To update the most recent systematic review and critically appraise current discrete choice experiments of pharmacy services in line with published reporting criteria and; (2) To provide an overview of key methodological developments in the design and analysis of discrete choice experiments. Methods The review used a comprehensive strategy to identify eligible studies (published between 1990 and 2015) by searching electronic databases for key terms related to discrete choice and best-worst scaling (BWS) experiments. All healthcare choice experiments were then hand-searched for key terms relating to pharmacy. Data were extracted using a published checklist. Results A total of 17 discrete choice experiments eliciting preferences for pharmacy services were identified for inclusion in the review. No BWS studies were identified. The studies elicited preferences from a variety of populations (pharmacists, patients, students) for a range of pharmacy services. Most studies were from a United Kingdom setting, although examples from Europe, Australia and North America were also identified. Discrete choice experiments for pharmacy services tended to include more attributes than non-pharmacy choice experiments. Few studies reported the use of qualitative research methods in the design and interpretation of the experiments (n = 9) or use of new methods of analysis to identify and quantify preference and scale heterogeneity (n = 4). No studies reported the use of Bayesian methods in their experimental design. 
Conclusion Incorporating more sophisticated methods in the design of pharmacy-related discrete choice experiments could help researchers produce more efficient experiments which are better suited to valuing complex pharmacy services. Pharmacy-related discrete choice experiments could also benefit from more sophisticated analytical techniques such as investigations into scale and preference heterogeneity. Employing these sophisticated methods for both design and analysis could extend the usefulness of discrete choice experiments to inform health and pharmacy policy.

  14. Development of a classification method for a crack on a pavement surface images using machine learning

    NASA Astrophysics Data System (ADS)

    Hizukuri, Akiyoshi; Nagata, Takeshi

    2017-03-01

    The purpose of this study is to develop a classification method for cracks in pavement surface images using machine learning, in order to reduce maintenance costs. Our database consists of 3500 pavement surface images, comprising 800 crack and 2700 normal pavement surface images. The pavement surface images are first decomposed into several sub-images using a discrete wavelet transform (DWT). We then calculate the wavelet sub-band histogram of the sub-images at each level. A support vector machine (SVM) trained on the computed wavelet sub-band histograms is employed to distinguish between crack and normal pavement surface images. The accuracies of the proposed classification method are 85.3% for crack and 84.4% for normal pavement images. The proposed classification method achieved high performance and would therefore be useful in maintenance inspection.

  15. Automated macromolecular crystal detection system and method

    DOEpatents

    Christian, Allen T [Tracy, CA]; Segelke, Brent [San Ramon, CA]; Rupp, Bernard [Livermore, CA]; Toppani, Dominique [Fontainebleau, FR]

    2007-06-05

    An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected in the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are then geometrically evaluated with respect to each other to identify crystal-like qualities such as parallel lines facing each other, similarity in length, and relative proximity. From this evaluation, a determination is made as to whether crystals are present in each image.
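The pairwise geometric evaluation of line segments described above can be sketched as a simple heuristic check. The tolerances (10 degrees, 0.7 length ratio, 20-pixel gap) and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def crystal_like_pair(seg1, seg2, angle_tol_deg=10.0, length_ratio=0.7, max_gap=20.0):
    """Heuristic check for crystal-like qualities between two line segments:
    near-parallel orientation, similar length, and relative proximity."""
    p1, q1 = np.asarray(seg1, float)
    p2, q2 = np.asarray(seg2, float)
    v1, v2 = q1 - p1, q2 - p2
    # Orientation difference (segments are undirected, so fold angles into [0, 90] deg).
    a1 = np.degrees(np.arctan2(v1[1], v1[0])) % 180.0
    a2 = np.degrees(np.arctan2(v2[1], v2[0])) % 180.0
    dang = min(abs(a1 - a2), 180.0 - abs(a1 - a2))
    parallel = dang <= angle_tol_deg
    # Similar length.
    l1, l2 = np.linalg.norm(v1), np.linalg.norm(v2)
    similar = min(l1, l2) / max(l1, l2) >= length_ratio
    # Relative proximity of the segment midpoints.
    close = np.linalg.norm((p1 + q1) / 2 - (p2 + q2) / 2) <= max_gap
    return bool(parallel and similar and close)

# Two parallel facets of a crystal-like outline vs. an unrelated stray segment.
facet_a = ((10, 10), (60, 10))
facet_b = ((12, 18), (58, 18))
stray = ((0, 0), (5, 90))
```

A full detector would score all segment pairs in an image and flag the image as crystal-bearing when enough pairs pass.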

  16. To Issue of Mathematical Management Methods Applied for Investment-Building Complex under Conditions of Economic Crisis

    NASA Astrophysics Data System (ADS)

    Novikova, V.; Nikolaeva, O.

    2017-11-01

    In this article the authors consider a cognitive method for managing the investment-building complex under crisis conditions. The factors influencing the choice of an investment strategy are studied, and the main lines of activity in crisis management are defined from the standpoint of mathematical modelling. A general approach to decision-making on investment in real assets is offered, based on discrete systems and optimal control theory. Using a discrete maximum principle, a solution to the stated problem is found, and a numerical algorithm for determining the optimal investment control is formulated. Analytical solutions are obtained for the case of constant profitability of fixed assets.

  17. Multisource Data Classification Using A Hybrid Semi-supervised Learning Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L; Shekhar, Shashi

    2009-01-01

    In many practical situations thematic classes cannot be discriminated by spectral measurements alone. Often one needs additional features such as population density, road density, wetlands, elevation, soil types, etc., which are discrete attributes. On the other hand, remote sensing image features are continuous attributes. Finding a suitable statistical model and estimating its parameters is a challenging task in multisource (e.g., discrete and continuous attribute) data classification. In this paper we present a semi-supervised learning method which assumes that the samples were generated by a mixture model, where each component can be either a continuous or a discrete distribution. Overall classification accuracy of the proposed method is improved by 12% in our initial experiments.
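The paper's semi-supervised EM estimation is not reproduced here; the supervised sketch below only illustrates the core modelling idea of combining a continuous (Gaussian) component and a discrete (categorical) component in one class-conditional likelihood. All names and the toy data are illustrative assumptions.

```python
import numpy as np

def fit_hybrid(Xc, Xd, y, n_classes, n_levels):
    """Per class: a Gaussian for the continuous feature, a categorical
    (with Laplace smoothing) for the discrete one."""
    params = []
    for k in range(n_classes):
        xc, xd = Xc[y == k], Xd[y == k]
        counts = np.bincount(xd, minlength=n_levels) + 1.0   # Laplace smoothing
        params.append((xc.mean(), xc.std() + 1e-9, counts / counts.sum()))
    return params

def predict(params, xc, xd):
    """Pick the class maximizing the joint log-likelihood of both attribute types."""
    scores = []
    for mu, sigma, cat in params:
        ll = -0.5 * ((xc - mu) / sigma) ** 2 - np.log(sigma)  # Gaussian log-density (up to a constant)
        ll += np.log(cat[xd])                                 # categorical log-probability
        scores.append(ll)
    return int(np.argmax(scores))

# Toy data: class 0 = low reflectance / road-density level 0,
#           class 1 = high reflectance / road-density level 2.
rng = np.random.default_rng(1)
Xc = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.8, 0.05, 50)])
Xd = np.concatenate([np.zeros(50, int), np.full(50, 2)])
y = np.repeat([0, 1], 50)
model = fit_hybrid(Xc, Xd, y, n_classes=2, n_levels=3)
```

In the semi-supervised setting of the paper, the same per-component likelihoods would be re-estimated with EM over both labeled and unlabeled samples.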

  18. Frequency division multiplexed multi-color fluorescence microscope system

    NASA Astrophysics Data System (ADS)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only obtain a grayscale image of an object, whereas multicolor imaging captures the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current multicolor imaging methods are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then transformed back with an inverse discrete Fourier transform. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained, and the multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with two excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence dynamic video consistent with the original scene. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed with this method.
Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate equals the frame rate of the camera. The optical system is simpler and needs no extra color-separation element. In addition, this method effectively filters out ambient light and other light signals that are not modulated.
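The per-pixel demodulation step can be sketched in a few lines of NumPy: two excitation lights are amplitude-modulated at different frequencies, a single "grayscale" sample records their sum, and a DFT separates the two color channels. The sampling rate, modulation frequencies and amplitudes are illustrative assumptions chosen so each modulation falls on an exact DFT bin.

```python
import numpy as np

fs, n = 1000.0, 1000                      # sampling rate (Hz) and samples per pixel record
t = np.arange(n) / fs
f1, f2 = 70.0, 130.0                      # modulation frequencies of the two excitation lights
a1, a2 = 3.0, 5.0                         # fluorescence amplitudes at one camera pixel

# The pixel records the sum of both amplitude-modulated fluorescence signals.
pixel = a1 * (1 + np.cos(2 * np.pi * f1 * t)) / 2 + a2 * (1 + np.cos(2 * np.pi * f2 * t)) / 2

# Demodulate: a DFT separates the colors in the frequency domain.
spec = np.fft.rfft(pixel)
freqs = np.fft.rfftfreq(n, 1 / fs)
# A cosine of amplitude A/2 shows up with magnitude (A/2) * n / 2 in its DFT bin,
# so multiplying by 4/n recovers the original fluorescence amplitude A.
rec1 = np.abs(spec[np.argmin(np.abs(freqs - f1))]) * 4 / n
rec2 = np.abs(spec[np.argmin(np.abs(freqs - f2))]) * 4 / n
```

Repeating this per pixel yields one monochrome image per modulation frequency, i.e. per color channel.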

  19. Motion representation of the long fingers: a proposal for the definitions of new anatomical frames.

    PubMed

    Coupier, Jérôme; Moiseev, Fédor; Feipel, Véronique; Rooze, Marcel; Van Sint Jan, Serge

    2014-04-11

    Despite the availability of the International Society of Biomechanics (ISB) recommendations for the orientation of anatomical frames, no consensus exists about motion representations related to finger kinematics. This paper proposes novel anatomical frames for motion representation of the phalangeal segments of the long fingers. A three-dimensional model of a human forefinger was acquired from a non-pathological fresh-frozen hand. Medical imaging was used to collect phalangeal discrete positions. Data processing was performed using a customized software interface ("lhpFusionBox") to create a specimen-specific model and to reconstruct the discrete motion path. Five examiners virtually palpated two sets of landmarks. These markers were then used to build anatomical frames following two methods: a reference method following ISB recommendations and a newly-developed method based on the mean helical axis (HA). Motion representations were obtained and compared between examiners. Virtual palpation precision was around 1 mm, which is comparable to results from the literature. The comparison of the two methods showed that the helical axis method seemed more reproducible between examiners, especially for secondary, or accessory, motions. Root mean square distances computed to compare the methods showed that the ISB method displayed a variability 10 times higher than the HA method. The HA method seems suitable for finger motion representation using discrete positions from medical imaging. Further investigations are required before being able to use the methodology with continuous tracking of markers set on the subject's hand. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. A Possible Approach to Inclusion of Space and Time in Frame Fields of Quantum Representations of Real and Complex Numbers

    DOE PAGES

    Benioff, Paul

    2009-01-01

    This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems, in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid system states, is discussed. Representations and images of other physical systems in the different frames are also described.

  1. Dynamics and Control of Flexible Space Vehicles

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1970-01-01

    The purpose of this report is twofold: (1) to survey the established analytic procedures for the simulation of controlled flexible space vehicles, and (2) to develop in detail methods that employ a combination of discrete and distributed ("modal") coordinates, i.e., the hybrid-coordinate methods. Analytic procedures are described in three categories: (1) discrete-coordinate methods, (2) hybrid-coordinate methods, and (3) vehicle normal-coordinate methods. Each of these approaches is described and analyzed for its advantages and disadvantages, and each is found to have an area of applicability. The hybrid-coordinate method combines the efficiency of the vehicle normal-coordinate method with the versatility of the discrete-coordinate method, and appears to have the widest range of practical application. The results in this report have practical utility in two areas: (1) complex digital computer simulation of flexible space vehicles of arbitrary configuration subject to realistic control laws, and (2) preliminary control system design based on transfer functions for linearized models of dynamics and control laws.

  2. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance exploration and exploitation. Discrete biogeography based optimization, which we call DBBO, is then obtained by integrating the discrete migration and mutation models. Finally, the DBBO method is used for feature selection, and three classifiers are evaluated with 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. Compared with a genetic algorithm, particle swarm optimization, a differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Stability of radiomic features in CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Bogowicz, M.; Riesterer, O.; Bundschuh, R. A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.

    2016-12-01

    This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models of local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow and mean transit time). Radiomics robustness was investigated with respect to the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation coefficient (ICC). To add value to our model, radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations, and for each group the parameter with the highest ICC was included in the final set. The acceptance level was 0.9 for the ICC and 0.7 for the correlation. Image discretization using a fixed number of bins or fixed intervals gave a similar number of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into radiomic parameters than the non-standardizable ones, with instability rates of 56-98% and 43-58%, respectively. The highest variability was observed for voxel size (instability rate >97% for both patient cohorts). Without standardization of CTP calculation factors, none of the studied radiomic parameters were stable. After standardization with respect to the non-standardizable factors, ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations.
Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.
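The stability screening above hinges on the intraclass correlation. The abstract does not state which ICC form was used, so the sketch below assumes ICC(3,1) (two-way mixed model, single measures, consistency), with toy numbers standing in for a radiomic parameter recomputed under different perfusion-calculation settings.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): consistency of n targets rated under k conditions.
    `ratings` is an (n_targets, k_conditions) array."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()     # between-target variation
    ss_cols = n * ((col_means - grand) ** 2).sum()     # systematic condition offsets
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# A "stable" parameter: per-patient values shift by a constant offset per setting,
# which ICC(3,1) treats as consistent (ICC = 1).
stable = np.array([[1.0, 1.1, 0.9],
                   [2.0, 2.1, 1.9],
                   [3.0, 3.1, 2.9],
                   [4.0, 4.1, 3.9],
                   [5.0, 5.1, 4.9],
                   [6.0, 6.1, 5.9]])
```

Parameters with ICC above the 0.9 acceptance level would be kept; the rest would be discarded as unstable under that calculation factor.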

  4. Magnetic resonance imaging findings of central nervous system in lysosomal storage diseases: A pictorial review.

    PubMed

    Fagan, Nathan; Alexander, Allen; Irani, Neville; Saade, Charbel; Naffaa, Lena

    2017-06-01

    Lysosomal storage diseases (LSD) are a complex group of genetic disorders that result from inborn errors of metabolism. These errors produce a variety of metabolic dysfunctions and the build-up of certain molecules within the tissues of the central nervous system (CNS). Although the diseases have discrete enzymatic deficiencies, their symptomatology and CNS imaging findings can overlap, which can be challenging for radiologists. The purpose of this paper is to review the most common CNS imaging findings in LSD in order to familiarize the radiologist with them and help narrow the differential diagnosis. © 2016 The Royal Australian and New Zealand College of Radiologists.

  5. Experimental phase synchronization detection in non-phase coherent chaotic systems by using the discrete complex wavelet approach

    NASA Astrophysics Data System (ADS)

    Ferreira, Maria Teodora; Follmann, Rosangela; Domingues, Margarete O.; Macau, Elbert E. N.; Kiss, István Z.

    2017-08-01

    Phase synchronization may emerge from mutually interacting non-linear oscillators, even under weak coupling, when phase differences are bounded, while amplitudes remain uncorrelated. However, the detection of this phenomenon can be a challenging problem to tackle. In this work, we apply the Discrete Complex Wavelet Approach (DCWA) for phase assignment, considering signals from coupled chaotic systems and experimental data. The DCWA is based on the Dual-Tree Complex Wavelet Transform (DT-CWT), which is a discrete transformation. Due to its multi-scale properties in the context of phase characterization, it is possible to obtain very good results from scalar time series, even with non-phase-coherent chaotic systems without state space reconstruction or pre-processing. The method correctly predicts the phase synchronization for a chemical experiment with three locally coupled, non-phase-coherent chaotic processes. The impact of different time-scales is demonstrated on the synchronization process that outlines the advantages of DCWA for analysis of experimental data.
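The DT-CWT phase assignment of the paper is not reproduced here. As a simpler stand-in, the sketch below extracts instantaneous phases with an FFT-based analytic signal (a plain Hilbert transform rather than the wavelet approach) and quantifies synchronization with the phase-locking value, which is high exactly when phase differences stay bounded. Signal parameters are illustrative.

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based Hilbert transform
    (a stand-in for the wavelet-based phase assignment)."""
    n = len(x)
    X = np.fft.fft(x - np.mean(x))
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0              # Nyquist bin for even-length signals
    return np.angle(np.fft.ifft(X * h))

def phase_locking_value(x, y):
    """PLV in [0, 1]: 1 means the phase difference is constant (phase synchronized)."""
    dphi = analytic_phase(x) - analytic_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 10, 2000, endpoint=False)
rng = np.random.default_rng(2)
locked_a = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal(t.size)
locked_b = np.sin(2 * np.pi * 3 * t + 0.7) + 0.05 * rng.standard_normal(t.size)
drifting = np.sin(2 * np.pi * 4.3 * t)   # different frequency: phase difference drifts
```

The multi-scale DCWA plays the role of `analytic_phase` here, and is what makes the approach workable for non-phase-coherent chaotic signals where a single analytic phase is ill-defined.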

  6. Radiological and endoscopic imaging methods in the management of cystic pancreatic neoplasms.

    PubMed

    Aslan, Ahmet; Inan, Ibrahim; Orman, Süleyman; Aslan, Mine; Acar, Murat

    2017-01-01

    The management of cystic pancreatic neoplasms (CPN) is a clinical dilemma because of their clinical presentations and malignant potential. Surgery is the best treatment choice; however, pancreatic surgery still has high complication rates, even in experienced centers. Imaging methods have a definitive role in the management of CPN, and computed tomography, magnetic resonance imaging, and endoscopic ultrasonography are the preferred methods since they can reveal features suspicious for malignancy. Therefore, radiologists, gastroenterologists, endoscopists, and surgeons should be aware of the common features of CPN, its discrete presentations on imaging methods, and the limitations of these modalities in the management of the disease. This study aims to review the radiological and endoscopic imaging methods used for the management of CPN. © Acta Gastro-Enterologica Belgica.

  7. Hybrid method (JM-ECS) combining the J-matrix and exterior complex scaling methods for scattering calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanroose, W.; Broeckhove, J.; Arickx, F.

    The paper proposes a hybrid method for calculating scattering processes. It combines the J-matrix method with exterior complex scaling and an absorbing boundary condition. The wave function is represented as a finite sum of oscillator eigenstates in the inner region, and it is discretized on a grid in the outer region. The method is validated for a one- and a two-dimensional model with partial wave equations and a calculation of p-shell nuclear scattering with semirealistic interactions.

  8. Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.

    PubMed

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan

    2004-06-05

    Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial domain and in Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. The results show that the process does not affect picture quality, which is attributed to the fact that changing the LSB of a pixel changes its brightness by only 1 part in 256. Spatial- and DFT-domain interleaving gave much lower %NRMSE than DCT- and DWT-domain interleaving. For spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensities. Among the frequency-domain interleaving methods, DFT was found to be the most efficient.
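The spatial-domain case reduces to LSB embedding, which can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the encryption/DPCM stages are omitted and the payload is random bits standing in for a compressed, encrypted patient record.

```python
import numpy as np

def interleave_lsb(image, payload_bits):
    """Hide payload bits in the least significant bit of the leading pixels."""
    flat = image.flatten()                      # flatten() copies, cover stays intact
    if payload_bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the hidden bits from the stego image."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # 8-bit grayscale cover image
payload = rng.integers(0, 2, 200, dtype=np.uint8)        # bits of the (encrypted) record
stego = interleave_lsb(cover, payload)
recovered = extract_lsb(stego, payload.size)
# Each LSB flip changes a pixel by at most 1 part in 256, preserving picture quality.
max_err = np.abs(stego.astype(int) - cover.astype(int)).max()
```

This is exactly why the abstract reports negligible quality loss: the per-pixel change is bounded by one gray level.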

  9. Excitation of Continuous and Discrete Modes in Incompressible Boundary Layers

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Reshotko, Eli

    1998-01-01

    This report documents the full details of the condensed journal article by Ashpis & Reshotko (JFM, 1990) entitled "The Vibrating Ribbon Problem Revisited." A revised formal solution of the vibrating ribbon problem of hydrodynamic stability is presented. The initial formulation of Gaster (JFM, 1965) is modified by application of the Briggs method and a careful treatment of the complex double Fourier transform inversions. Expressions are obtained in a natural way for the discrete spectrum as well as for the four branches of the continuous spectra. These correspond to discrete and branch-cut singularities in the complex wave-number plane. The solutions from the continuous spectra decay both upstream and downstream of the ribbon, with the decay in the upstream direction being much more rapid than that in the downstream direction. Comments and clarification of related prior work are made.

  10. Edge detection for optical synthetic aperture based on deep neural network

    NASA Astrophysics Data System (ADS)

    Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2017-09-01

    Synthetic aperture optics systems can meet the demand for next-generation space telescopes to be lighter, larger and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a whole aperture. More edge regions mean more edge pixels in the image, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a deep neural network for edge detection in optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we trained the neural network on the training image set and tested its performance on the validation set; training was stopped when the test error on the validation set stopped declining. Given an input image, the neighborhood around each pixel is fed into the network, and the image is scanned pixel by pixel through the trained hidden layers. The network output judges whether the center of the input block lies on an edge of the fringes. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes its decision on a much larger neighborhood, and is more global and comprehensive. Experiments on more than 2,000 images are also given to prove that our method outperforms classical algorithms in edge detection on optical images.

  11. The Selection of Computed Tomography Scanning Schemes for Lengthy Symmetric Objects

    NASA Astrophysics Data System (ADS)

    Trinh, V. B.; Zhong, Y.; Osipov, S. P.

    2017-04-01

    The article describes the basic computed tomography scanning schemes for lengthy symmetric objects: continuous (discrete) rotation with discrete linear movement; continuous (discrete) rotation with discrete linear movement to acquire 2D projections; continuous (discrete) linear movement with discrete rotation to acquire one-dimensional projections; and continuous (discrete) rotation to acquire 2D projections. A general method for calculating the scanning time is discussed in detail, from which a comparison principle for selecting a scanning scheme is extracted. This is possible because the input data are the same for all scanning schemes: the maximum energy of the X-ray radiation; the power of the X-ray source; the angle of the X-ray cone beam; the transverse dimension of a single detector; the specified resolution; and the maximum time needed to form one point of the original image (which determines the number of registered photons). The possibilities of the proposed method for comparing scanning schemes are demonstrated on a cylindrical object with a mass thickness of 4 g/cm2, an effective atomic number of 15 and a length of 1300 mm. The scanning times are analyzed, the productivity of all schemes is examined, and conclusions are drawn about their efficiency, selecting the most effective one.

  12. Functional entropy variables: A new methodology for deriving thermodynamically consistent algorithms for complex fluids, with particular reference to the isothermal Navier–Stokes–Korteweg equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.

    2013-09-01

    We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.

  13. High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

  14. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.

    PubMed

    Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil

    2006-06-01

    In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of this hybrid model. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems, but they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical; hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.
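The linear fusion strategy (global evolutionary stage, then local refinement from its result) can be sketched on a toy multimodal objective. Neither stage is the paper's algorithm: the global stage is a plain (1+1) evolution strategy, and the local stage uses derivative-free finite-difference descent with backtracking as a stand-in for the Discrete Gradient method. All hyperparameters are illustrative.

```python
import numpy as np

def evolutionary_stage(f, dim, iters=200, seed=0):
    """(1+1) evolution strategy: global exploration to find a good starting point."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)
    step = 1.0
    for _ in range(iters):
        cand = x + step * rng.standard_normal(dim)
        if f(cand) < f(x):
            x = cand
        else:
            step *= 0.99              # slowly shrink the mutation strength
    return x

def local_stage(f, x, iters=100, h=1e-6):
    """Derivative-free local refinement: finite-difference descent with backtracking."""
    x = np.asarray(x, float).copy()
    lr = 0.1
    for _ in range(iters):
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(x.size)])
        cand = x - lr * g
        if f(cand) < f(x):
            x = cand                  # accept only improving steps (monotone descent)
        else:
            lr *= 0.5                 # backtrack: shrink the step size
    return x

# Linear hybrid model: run the global stage once, then hand its result to local search.
rastrigin = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
x0 = evolutionary_stage(rastrigin, dim=2)
x_star = local_stage(rastrigin, x0)
```

The iterative and restricted variants in the paper differ in how often, and on what region, control passes back and forth between the two stages.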

  15. Integral imaging based light field display with enhanced viewing resolution using holographic diffuser

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun

    2017-11-01

    An integral imaging based light field display method using a holographic diffuser is proposed, gaining enhanced viewing resolution over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, and interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved, becoming independent of the limitation imposed by the Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using the holographic diffuser are demonstrated, verifying the feasibility of the method.

  16. Application of linearized inverse scattering methods for the inspection in steel plates embedded in concrete structures

    NASA Astrophysics Data System (ADS)

    Tsunoda, Takaya; Suzuki, Keigo; Saitoh, Takahiro

    2018-04-01

    This study develops a method to visualize the state of the steel-concrete interface with ultrasonic testing (UT). Scattered waves are obtained in the UT pitch-catch mode from the surface of the concrete. A discrete wavelet transform is applied to extract the echoes scattered from the steel-concrete interface, and linearized inverse scattering methods (LISM) are then used to image the interface. The results show that LISM with the Born and Kirchhoff approximations provide clear images of the target.

  17. Complex DNA Brick Assembly.

    PubMed

    Ong, Luvena L; Ke, Yonggang

    2017-01-01

    DNA nanostructures are a useful technology for precisely organizing and manipulating nanomaterials. The DNA bricks method is a modular and versatile platform for applications requiring discrete or periodic structures with complex three-dimensional features. Here, we describe how structures are designed from the fundamental strand architecture through assembly and characterization of the formed structures.

  18. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the image reconstructed from these moments has lower error than that of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time is much longer than with the proposed method.
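The exact-integration idea can be sketched directly: treat each pixel as a constant patch on [-1, 1]^2 and integrate the Legendre polynomials exactly over each pixel interval via the identity (2n+1) P_n = d/dx (P_{n+1} - P_{n-1}). This is a minimal sketch, not the authors' implementation; function names and the normalization convention ((2p+1)(2q+1)/4) are illustrative but standard.

```python
import numpy as np
from numpy.polynomial import legendre as npleg

def legendre_integral(n, a, b):
    """Exact integral of the Legendre polynomial P_n over [a, b]."""
    if n == 0:
        return b - a
    c_hi = np.zeros(n + 2); c_hi[n + 1] = 1.0   # coefficient vector selecting P_{n+1}
    c_lo = np.zeros(n);     c_lo[n - 1] = 1.0   # coefficient vector selecting P_{n-1}
    antideriv = lambda x: (npleg.legval(x, c_hi) - npleg.legval(x, c_lo)) / (2 * n + 1)
    return antideriv(b) - antideriv(a)

def exact_legendre_moment(img, p, q):
    """Exact Legendre moment of order (p, q) of an image mapped onto [-1, 1]^2,
    with each pixel treated as a constant patch over its interval."""
    rows, cols = img.shape
    xe = np.linspace(-1.0, 1.0, cols + 1)       # pixel edges along x (columns)
    ye = np.linspace(-1.0, 1.0, rows + 1)       # pixel edges along y (rows)
    Ix = np.array([legendre_integral(p, xe[i], xe[i + 1]) for i in range(cols)])
    Iy = np.array([legendre_integral(q, ye[j], ye[j + 1]) for j in range(rows)])
    return (2 * p + 1) * (2 * q + 1) / 4.0 * (Iy @ img @ Ix)
```

For a unit image the order-(0,0) moment is exactly 1 and all higher-order moments vanish by orthogonality, which a zeroth-order sampling approximation only achieves approximately.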

  19. Error and Complexity Analysis for a Collocation-Grid-Projection Plus Precorrected-FFT Algorithm for Solving Potential Integral Equations with LaPlace or Helmholtz Kernels

    NASA Technical Reports Server (NTRS)

    Phillips, J. R.

    1996-01-01

    In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is order n log n, nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order n^(4/3). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.

  20. Terahertz imaging devices and systems, and related methods, for detection of materials

    DOEpatents

    Kotter, Dale K.

    2016-11-15

    Terahertz imaging devices may comprise a focal plane array including a substrate and a plurality of resonance elements. The plurality of resonance elements may comprise a conductive material coupled to the substrate. Each resonance element of the plurality of resonance elements may be configured to resonate and produce an output signal responsive to incident radiation having a frequency between about 0.1 THz and 4 THz. A method of detecting a hazardous material may comprise receiving incident radiation by a focal plane array having a plurality of discrete pixels, each including a resonance element configured to absorb the incident radiation at a resonant frequency in the THz range, generating an output signal from each of the discrete pixels, and determining a presence of a hazardous material by interpreting spectral information from the output signal.

  1. NIR DLP hyperspectral imaging system for medical applications

    NASA Astrophysics Data System (ADS)

    Wehner, Eleanor; Thapa, Abhas; Livingston, Edward; Zuzak, Karel

    2011-03-01

    DLP® hyperspectral reflectance imaging in the visible range has previously been shown to quantify hemoglobin oxygenation in subsurface tissues 1 mm to 2 mm deep. Extending the spectral range into the near infrared captures biochemical information from deeper subsurface tissues. Unlike any other illumination method, the digital micro-mirror device (DMD) chip is programmable, allowing the user to actively illuminate with precisely predetermined illumination spectra with a minimum bandpass of approximately 10 nm. It is possible to construct active spectral-based illumination that includes, but is not limited to, sharp cutoffs acting as filters or complex spectra that vary the intensity of light at discrete wavelengths. We have characterized and tested a pure NIR (760 nm to 1600 nm) DLP hyperspectral reflectance imaging system. In its simplest application, the NIR system can be used to quantify the percentage of water in a subject, enabling edema visualization. It can also be used to map vein structure in a patient in real time. During gall bladder surgery, this system could be invaluable for imaging bile through fatty tissue, aiding surgeons in locating the common bile duct in real time without injecting any contrast agents.

  2. Directional dual-tree complex wavelet packet transforms for processing quadrature signals.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2016-03-01

    Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also produce signals in quadrature format. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. These resultant directional signals can be employed to detect asymptomatic embolic signals caused by small emboli, which are indicators of a possible future stroke, in the cerebral circulation. Various transform-based methods, such as Fourier and wavelet transforms, have frequently been used in processing embolic signals. However, most of the time, the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to their non-stationary time-frequency behavior. Alternatively, the discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which can map directional information while processing quadrature signals and have less computational complexity than the existing wavelet packet-based methods, are introduced. The performance of the proposed methods is examined in detail using single-frequency, synthetic narrow-band, and embolic quadrature signals.

  3. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as a regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction using the PAPA method.
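
    The regularization term investigated above, the ℓ1-norm of DCT coefficients of the estimated image, can be sketched as follows. This is a minimal illustration, not the PAPA implementation: the function names are made up, and the naive O(N^2) orthonormal DCT-II stands in for the non-decimated transform of the paper.

```python
import math

def dct2_1d(x):
    """Orthonormal 1-D DCT-II (naive O(N^2) form, adequate for a sketch)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[m] * math.cos(math.pi * (2 * m + 1) * k / (2 * n))
                for m in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct2(image):
    """Separable 2-D DCT: transform rows, then columns."""
    rows = [dct2_1d(r) for r in image]
    cols = [dct2_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def l1_dct_penalty(image, beta):
    """Regularization term beta * ||DCT(x)||_1 added to the data-fit objective."""
    return beta * sum(abs(c) for row in dct2(image) for c in row)
```

    For a constant 2 x 2 image of value 2, all energy lands in the DC coefficient (value 4), so the penalty with beta = 1 is exactly 4; smooth images thus incur small penalties, which is what makes the term a smoothness prior.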

  4. Genetics algorithm optimization of DWT-DCT based image Watermarking

    NASA Astrophysics Data System (ADS)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

    Data hiding in image content is mandatory for establishing ownership of the image. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space; the layer in which the watermark is embedded can also be selected. Next, the 2D-DWT transforms the selected layer, yielding four subbands, of which only one is selected. Block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection. A delta parameter replacing the pixels in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters to be optimized by the genetic algorithm (GA) are the selected color space, the layer, the selected subband of the DWT decomposition, the block size, the embedding range, and delta. Simulation results show that GA is able to determine the exact parameters yielding optimum imperceptibility and robustness for any watermarked-image condition, whether attacked or not. The DWT process in DCT-based image watermarking optimized by GA improves the performance of image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the robustness of the proposed method reaches perfect watermark recovery with BER = 0, and the watermarked image quality measured by PSNR is also increased by about 5 dB over that of the previous method.
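
    The +/-delta embedding rule described above can be sketched in a few lines. This is a hedged illustration of the rule only, not the paper's pipeline: the zigzag ordering, DWT/DCT stages, and GA-tuned range bounds are omitted, and the positions and delta below are made-up values.

```python
# A watermark bit "1" is written as +delta and "0" as -delta over a chosen
# range of zigzag-ordered AC coefficients; extraction reads back the sign.

def embed_bit(zigzag_coeffs, bit, lo, hi, delta):
    """Replace coefficients in positions [lo, hi) with +delta or -delta."""
    out = list(zigzag_coeffs)
    value = delta if bit == 1 else -delta
    for idx in range(lo, hi):
        out[idx] = value
    return out

def extract_bit(zigzag_coeffs, lo, hi):
    """The sign of the sum over the embedding range recovers the bit,
    giving some robustness against small perturbations of single values."""
    s = sum(zigzag_coeffs[idx] for idx in range(lo, hi))
    return 1 if s > 0 else 0
```

    Summing before taking the sign is what lets the bit survive mild attacks that flip or attenuate individual coefficients.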

  5. Double Density Dual Tree Discrete Wavelet Transform implementation for Degraded Image Enhancement

    NASA Astrophysics Data System (ADS)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    The wavelet transform is a principal tool for modern image processing applications. A Double Density Dual Tree Discrete Wavelet Transform is used and investigated for image denoising. Images are considered for the analysis, and the performance is compared with the discrete wavelet transform and the Double Density DWT. Peak signal-to-noise ratio and root-mean-square error values are calculated for the denoised images under all three wavelet techniques, and the performance is evaluated. The proposed technique gives better performance than the other two wavelet techniques.
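
    The two quality metrics used for the comparison above are standard and easy to state precisely. A minimal sketch (function names are illustrative; the peak value of 255 assumes 8-bit images):

```python
import math

def rmse(original, denoised):
    """Root-mean-square error between two equal-sized images."""
    errors = [a - b for ro, rd in zip(original, denoised)
                    for a, b in zip(ro, rd)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def psnr(original, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = rmse(original, denoised)
    if err == 0:
        return float("inf")
    return 20.0 * math.log10(peak / err)
```

    Higher PSNR and lower RMSE both indicate a denoised image closer to the reference.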

  6. Computation of Steady and Unsteady Laminar Flames: Theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai

    1999-01-01

    In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.

  7. Numerical Method for Darcy Flow Derived Using Discrete Exterior Calculus

    NASA Astrophysics Data System (ADS)

    Hirani, A. N.; Nakshatrala, K. B.; Chaudhry, J. H.

    2015-05-01

    We derive a numerical method for Darcy flow, and also for Poisson's equation in mixed (first order) form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds, and DEC is one of its discretizations on simplicial complexes such as triangle and tetrahedral meshes. DEC is a coordinate invariant discretization, in that it does not depend on the embedding of the simplices or the whole mesh. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of the flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for a spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure is of saddle type, with a diagonal matrix as the top left block. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solutions in two dimensions, a layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. We also show numerical evidence of convergence of the flux and the pressure. A convergence experiment is included for Darcy flow on a surface. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this article. We also include a discussion of the boundary condition in terms of exterior calculus.
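
    The saddle-type block structure mentioned above (a diagonal Hodge star in the top-left block, the discrete operator and its transpose off the diagonal, and a zero bottom-right block) can be illustrated with a tiny assembly sketch. The sign convention and all names are assumptions for illustration, not the authors' formulation.

```python
def saddle_system(hodge_diag, d_matrix):
    """Assemble the block matrix [[H, D^T], [D, 0]], where H is the
    diagonal (positive) Hodge star acting on fluxes and D is a discrete
    operator mapping fluxes to cell quantities. Sign conventions for the
    off-diagonal blocks vary; this is one common symmetric choice."""
    n = len(hodge_diag)           # number of flux unknowns
    m = len(d_matrix)             # number of pressure unknowns
    size = n + m
    A = [[0.0] * size for _ in range(size)]
    for i in range(n):
        A[i][i] = hodge_diag[i]            # diagonal top-left block H
    for r in range(m):
        for c in range(n):
            A[n + r][c] = d_matrix[r][c]   # D (bottom-left)
            A[c][n + r] = d_matrix[r][c]   # D^T (top-right)
    return A
```

    The zero bottom-right block is what makes the system indefinite (saddle type) rather than positive definite, which is why such systems need specialized solvers or preconditioners.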

  8. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, projecting CCD images onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which can be used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, without the excessive implementation complexity of JPEG2000. PMID:25114977

  9. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  10. Application of time series discretization using evolutionary programming for classification of precancerous cervical lesions.

    PubMed

    Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo

    2014-06-01

    In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals into which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy with respect to the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
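
    A toy version of the scheme above can make the three cost criteria concrete. This sketch is an assumption-laden simplification, not the authors' evolutionary-programming search: it uses fixed uniform segments and amplitude levels (SAX-like), and the cost function below merely evaluates one candidate discretization.

```python
import math
from collections import Counter

def discretize(series, n_segments, n_levels, lo, hi):
    """Compress a series to n_segments segment means, then map each mean
    to one of n_levels uniform amplitude intervals over [lo, hi]."""
    seg_len = len(series) / n_segments
    symbols = []
    for s in range(n_segments):
        seg = series[int(s * seg_len):int((s + 1) * seg_len)]
        mean = sum(seg) / len(seg)
        level = min(int((mean - lo) / (hi - lo) * n_levels), n_levels - 1)
        symbols.append(level)
    return tuple(symbols)

def cost(strings_with_labels):
    """Toy version of the three criteria: weighted label entropy within each
    distinct string (classification), distinct-string count (complexity),
    and string length (compression rate)."""
    by_string = {}
    for string, label in strings_with_labels:
        by_string.setdefault(string, []).append(label)
    total = len(strings_with_labels)
    entropy = 0.0
    for labels in by_string.values():
        w = len(labels) / total
        counts = Counter(labels)
        entropy += w * -sum((c / len(labels)) * math.log2(c / len(labels))
                            for c in counts.values())
    n_distinct = len(by_string)
    length = len(next(iter(by_string)))
    return entropy, n_distinct, length
```

    A good discretization drives the entropy toward zero (each string maps to one class) while keeping the distinct-string count and string length small; the evolutionary search trades these off.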

  11. The comparison between SVD-DCT and SVD-DWT digital image watermarking

    NASA Astrophysics Data System (ADS)

    Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas

    2018-03-01

    With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a form that is easy for everyone to access. However, a problem appears when someone else claims the creation as their own property or modifies part of it. This makes copyright protection necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility when the watermark is inserted in a carrier image: the carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and the robustness of the image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur, rescaling, and JPEG compression, whereas embedding in high frequencies is robust to Gaussian noise.

  12. A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.

    DTIC Science & Technology

    1980-01-01

    de Silva [14], and Weisman and Wood [76]. A particular direct search algorithm, the simplex method, has been cited for having the potential for...spaced discrete points on a line which makes the direction suitable for an efficient integer search technique based on Fibonacci numbers. Two...defined by a subset of variables. The complex algorithm is particularly well suited for this subspace search for two reasons. First, the complex method

  13. Detecting vortices in superconductors: Extracting one-dimensional topological singularities from a discretized complex scalar field

    DOE PAGES

    Phillips, Carolyn L.; Peterka, Tom; Karpeyev, Dmitry; ...

    2015-02-20

    In type II superconductors, the dynamics of superconducting vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter. Extracting their precise positions and motion from discretized numerical simulation data is an important, but challenging, task. In the past, vortices have mostly been detected by analyzing the magnitude of the complex scalar field representing the order parameter and visualized by corresponding contour plots and isosurfaces. However, these methods, primarily used for small-scale simulations, blur the fine details of the vortices, scale poorly to large-scale simulations, and do not easily enable isolating and tracking individual vortices. In this paper, we present a method for exactly finding the vortex core lines from a complex order parameter field. With this method, vortices can be easily described at a resolution even finer than the mesh itself. The precise determination of the vortex cores allows the interplay of the vortices inside a model superconductor to be visualized in higher resolution than has previously been possible. Finally, by representing the field as the set of vortices, this method also massively reduces the data footprint of the simulations and provides the data structures for further analysis and feature tracking.

  14. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
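
    The quantization step described above is mechanically simple: each DCT coefficient is divided by its matrix entry and rounded, so larger entries (assigned to less visible frequencies) discard more information. A minimal sketch; the matrix values below are illustrative placeholders, not the perceptually derived entries of the patent.

```python
def quantize(dct_block, q_matrix):
    """Divide each DCT coefficient by its quantization entry and round."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(dct_block, q_matrix)]

def dequantize(quantized, q_matrix):
    """Approximate reconstruction: multiply back by the same entries."""
    return [[v * q for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(quantized, q_matrix)]
```

    The rounding is where the lossy compression happens; the design problem solved by the method above is choosing the matrix entries so the rounding error stays perceptually invisible.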

  15. 3-D discrete analytical ridgelet transform.

    PubMed

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform within discrete analytical geometry theory, by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin, defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.

  16. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    The image fusion process consolidates data and information from multiple images of the same scene into a single image. Each source image may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that utilizes the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images. In addition, the fused image retains the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into its R, G, and B channels for all source images; (2) the DCT algorithm is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared on the basis of their variance values, and the block with the maximum variance is selected as the block of the new image, a process repeated for all channels of the source images; and (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts that would reduce the quality of the fused image. The proposed approach is evaluated using three measures: the average of Q(abf), the standard deviation, and the peak signal-to-noise ratio. The experimental results of the proposed technique show good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
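
    Step (4) above, selecting whichever corresponding block has the larger variance, can be sketched directly. This is a hedged simplification: the paper compares variances of 8 × 8 DCT blocks, while the illustration below works on tiny 2 × 2 blocks and omits the DCT/IDCT stages.

```python
def variance(block):
    """Population variance of a 2-D block of values."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

def fuse_blocks(block_a, block_b):
    """Keep the block with more detail, i.e. the higher variance; ties go
    to the first source."""
    return block_a if variance(block_a) >= variance(block_b) else block_b
```

    In multifocus fusion, the in-focus version of a region has sharper edges and hence higher local variance, which is why this simple rule selects the focused source for each block.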

  17. A discrete mesoscopic particle model of the mechanics of a multi-constituent arterial wall.

    PubMed

    Witthoft, Alexandra; Yazdani, Alireza; Peng, Zhangli; Bellini, Chiara; Humphrey, Jay D; Karniadakis, George Em

    2016-01-01

    Blood vessels have unique properties that allow them to function together within a complex, self-regulating network. The contractile capacity of the wall combined with complex mechanical properties of the extracellular matrix enables vessels to adapt to changes in haemodynamic loading. Homogenized phenomenological and multi-constituent, structurally motivated continuum models have successfully captured these mechanical properties, but truly describing intricate microstructural details of the arterial wall may require a discrete framework. Such an approach would facilitate modelling interactions between or the separation of layers of the wall and would offer the advantage of seamless integration with discrete models of complex blood flow. We present a discrete particle model of a multi-constituent, nonlinearly elastic, anisotropic arterial wall, which we develop using the dissipative particle dynamics method. Mimicking basic features of the microstructure of the arterial wall, the model comprises an elastin matrix having isotropic nonlinear elastic properties plus anisotropic fibre reinforcement that represents the stiffer collagen fibres of the wall. These collagen fibres are distributed evenly and are oriented in four directions, symmetric to the vessel axis. Experimental results from biaxial mechanical tests of an artery are used for model validation, and a delamination test is simulated to demonstrate the new capabilities of the model. © 2016 The Author(s).

  18. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of the linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and benefits of the proposed method.

  19. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions, presenting a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  1. Input-output identification of controlled discrete manufacturing systems

    NASA Astrophysics Data System (ADS)

    Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques

    2014-03-01

    The automated construction of discrete event models from observations of a system's external behaviour is addressed. This problem, often referred to as system identification, allows obtaining models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method can process a large quantity of observed long sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DES. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.
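
    A common first step in this kind of I/O-based identification, before any Petri net is built, is to turn the observed sequence of controller I/O vectors into a sequence of change events. The sketch below illustrates that step only, under assumptions of our own: the signal names are made up, and the paper's actual event construction may differ.

```python
def io_events(io_sequence, signal_names):
    """Compare successive I/O vectors and record each change as an event
    tuple of (signal name, new value); unchanged steps produce nothing."""
    events = []
    for prev, curr in zip(io_sequence, io_sequence[1:]):
        step = tuple((name, new)
                     for name, old, new in zip(signal_names, prev, curr)
                     if old != new)
        if step:
            events.append(step)
    return events
```

    Each event tuple can then serve as a candidate transition label when assembling an interpreted Petri net from many such observed sequences.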

  2. Image gathering and restoration - Information and visual quality

    NASA Technical Reports Server (NTRS)

    Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.

    1989-01-01

    A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements over the visual quality obtained by traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.

  3. Novel morphology change of Au-Methotrexate conjugates: From nanochains to discrete nanoparticles.

    PubMed

    Wang, Wei-Yuan; Zhao, Xiu-Fen; Ju, Xiao-Han; Wang, Yu; Wang, Lin; Li, Shu-Ping; Li, Xiao-Dong

    2016-12-30

    A novel morphology change of Au-methotrexate (Au-MTX) conjugates that could transform from nanochains to discrete nanoparticles was achieved by a simple, one-pot hydrothermal growth method. Herein, MTX was used efficiently as a complex-forming agent, reducing agent, capping agent, and, importantly, a targeting anticancer drug. The formation mechanism suggested a similarity with molecular imprinting technology. The Au-MTX complex induced the MTX molecules to selectively adsorb on different crystal facets of gold nanoparticles (AuNPs) and then formed gold nanospheres. Moreover, the abundantly bound MTX molecules promoted directional alignment of these gold nanospheres to further form nanochains. More interestingly, the linear structures gradually changed into discrete nanoparticles upon adding different amounts of ethylene diamine tetra (methylene phosphonic acid) (EDTMPA) to the initial reaction solution, which likely arose from the strong electrostatic effect of the negatively charged phosphonic acid groups. Compared with the as-prepared nanochains, the resultant discrete nanoparticles showed almost equal drug-loading capacity but better-controlled drug release, higher colloidal stability, and stronger in vitro anticancer activity. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Automated quadrilateral surface discretization method and apparatus usable to generate mesh in a finite element analysis system

    DOEpatents

    Blacker, Teddy D.

    1994-01-01

    An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all-quadrilateral elements, which is particularly useful in finite element analysis. The generated mesh of all-quadrilateral elements is boundary sensitive, orientation insensitive, and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Alternatively, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input, and the rows are iteratively layered inward from the exterior boundary in a first, counterclockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second, clockwise direction. As a result, a high-quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.

  5. Specular reflection treatment for the 3D radiative transfer equation solved with the discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.

    The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based either on a quadrature or on angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated with the help of comparisons with analytical solutions before being applied on complex geometries.

  6. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
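    For reference, the discrete cyclic convolution that the hybrid transform accelerates can be computed directly in O(N^2) time; the sketch below is a naive reference implementation, not the Winograd/Galois-field algorithm of the paper:

```python
def cyclic_convolution(a, b):
    """Direct cyclic convolution of two equal-length complex sequences:
    c[m] = sum_k a[k] * b[(m - k) mod N]."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    n = len(a)
    return [sum(a[k] * b[(m - k) % n] for k in range(n)) for m in range(n)]

# Convolving with the unit impulse returns the other sequence unchanged.
print(cyclic_convolution([1, 0, 0], [5, 6, 7]))  # [5, 6, 7]
print(cyclic_convolution([1 + 1j, 2], [3, 4 - 1j]))
```

    A transform-based method computes the same result with far fewer multiplications by transforming both sequences, multiplying pointwise, and inverse transforming.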

  7. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persistent difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
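    The deformation-error metric used above, the magnitude of the vector difference between known and predicted deformations, is straightforward to compute; a minimal sketch over hypothetical deformation vector fields:

```python
from math import sqrt

def mean_deformation_error(known, predicted):
    """Mean magnitude of the vector difference between known and
    predicted deformation fields, given as lists of (dx, dy, dz)."""
    errors = [sqrt(sum((k - p) ** 2 for k, p in zip(kv, pv)))
              for kv, pv in zip(known, predicted)]
    return sum(errors) / len(errors)

# Hypothetical fields: one voxel matches exactly, one is off by 2 mm.
known = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
predicted = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(mean_deformation_error(known, predicted))  # (0 + 2) / 2 = 1.0
```

    Plotting this mean error against the number of simulated tissue levels is how the stabilization point at three levels can be identified.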

  8. Discrete differential geometry: The nonplanar quadrilateral mesh

    NASA Astrophysics Data System (ADS)

    Twining, Carole J.; Marsland, Stephen

    2012-06-01

    We consider the problem of constructing a discrete differential geometry defined on nonplanar quadrilateral meshes. Physical models on discrete nonflat spaces are of inherent interest, as well as being used in applications such as computation for electromagnetism, fluid mechanics, and image analysis. However, the majority of analysis has focused on triangulated meshes. We consider two approaches: discretizing the tensor calculus, and a discrete mesh version of differential forms. While these two approaches are equivalent in the continuum, we show that this is not true in the discrete case. Nevertheless, we show that it is possible to construct mesh versions of the Levi-Civita connection (and hence the tensorial covariant derivative and the associated covariant exterior derivative), the torsion, and the curvature. We show how discrete analogs of the usual vector integral theorems are constructed in such a way that the appropriate conservation laws hold exactly on the mesh, rather than only as approximations to the continuum limit. We demonstrate the success of our method by constructing a mesh version of classical electromagnetism and discuss how our formalism could be used to deal with other physical models, such as fluids.

  9. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability, and illustrate its performance with realistic synthetic case studies.

  10. Quality data collection and management technology of aerospace complex product assembly process

    NASA Astrophysics Data System (ADS)

    Weng, Gang; Liu, Jianhua; He, Yongxi; Zhuang, Cunbo

    2017-04-01

    Aiming at solving the problems of difficult management and poor traceability of discrete assembly process quality data, a data collection and management method is proposed that takes the assembly process and the BOM as its core. The method includes a data collection approach based on workflow technology, a data model based on the BOM, and quality traceability of the assembly process. Finally, an assembly process quality data management system is developed, realizing effective control and management of quality information for the complex product assembly process.

  11. Bridging consciousness and cognition in memory and perception: evidence for both state and strength processes.

    PubMed

    Aly, Mariam; Yonelinas, Andrew P

    2012-01-01

    Subjective experience indicates that mental states are discrete, in the sense that memories and perceptions readily come to mind in some cases, but are entirely unavailable to awareness in others. However, a long history of psychophysical research has indicated that the discrete nature of mental states is largely epiphenomenal and that mental processes vary continuously in strength. We used a novel combination of behavioral methodologies to examine the processes underlying perception of complex images: (1) analysis of receiver operating characteristics (ROCs), (2) a modification of the change-detection flicker paradigm, and (3) subjective reports of conscious experience. These methods yielded converging results showing that perceptual judgments reflect the combined, yet functionally independent, contributions of two processes available to conscious experience: a state process of conscious perception and a strength process of knowing; processes that correspond to recollection and familiarity in long-term memory. In addition, insights from the perception experiments led to the discovery of a new recollection phenomenon in a long-term memory change detection paradigm. The apparent incompatibility between subjective experience and theories of cognition can be understood within a unified state-strength framework that links consciousness to cognition across the domains of perception and memory.

  13. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted φ_γ, are discrete-time signals, where γ represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
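    Of the four algorithms compared, Matching Pursuits is the easiest to sketch: greedily select the dictionary atom most correlated with the current residual, subtract its projection, and repeat. The toy dictionary below is an assumption for illustration, not the paper's wavelet-packet dictionary:

```python
def matching_pursuit(signal, dictionary, iterations):
    """Greedy sparse approximation: at each step select the unit-norm
    atom most correlated with the residual and remove its projection."""
    residual = list(signal)
    coefficients = []
    for _ in range(iterations):
        # inner products of the residual with every atom
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        coefficients.append((best, scores[best]))
        residual = [r - scores[best] * a
                    for r, a in zip(residual, dictionary[best])]
    return coefficients, residual

# Toy dictionary of unit-norm atoms (the standard basis of R^2)
atoms = [[1.0, 0.0], [0.0, 1.0]]
coeffs, res = matching_pursuit([3.0, 0.0], atoms, 1)
print(coeffs, res)  # picks atom 0 with coefficient 3.0; residual [0, 0]
```

    With an overcomplete dictionary the same greedy loop yields a sparse representation whose leading coefficients can be kept for compression.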

  15. A high-order staggered meshless method for elliptic problems

    DOE PAGES

    Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston

    2017-03-21

    Here, we present a new meshless method for scalar diffusion equations which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter, though, our approach is truly meshless because it only requires the graph of nearby-neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method which attains $O(h^{m})$ convergence in both $L^2$- and $H^1$-norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.

  16. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ to deliver several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that efficiently utilize resources, can work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming of out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image-processing requirements. F3D performs several different types of 3D image-processing operations, such as non-linear filtering using bilateral filtering and/or median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line-structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images and consist of two parts: (a) a reference shape or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for performing automated segmentation of image stacks. F3D is also a "descendant" of Quant-CT, another software package we developed in the past. These two modules are to be integrated in a next version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.

  17. Self-assembly of discrete metal complexes in aqueous solution via block copolypeptide amphiphiles.

    PubMed

    Kuroiwa, Keita; Masaki, Yoshitaka; Koga, Yuko; Deming, Timothy J

    2013-01-21

    The integration of discrete metal complexes has been attracting significant interest due to the potential of these materials for soft metal-metal interactions and supramolecular assembly. Additionally, block copolypeptide amphiphiles have been investigated concerning their capacity for self-assembly into structures such as nanoparticles, nanosheets and nanofibers. In this study, we combined these two concepts by investigating the self-assembly of discrete metal complexes in aqueous solution using block copolypeptides. Normally, discrete metal complexes such as [Au(CN)(2)]-, when molecularly dispersed in water, cannot interact with one another. Our results demonstrated, however, that the addition of block copolypeptide amphiphiles such as K(183)L(19) to [Au(CN)(2)]- solutions induced one-dimensional integration of the discrete metal complex, resulting in photoluminescence originating from multinuclear complexes with metal-metal interactions. Transmission electron microscopy (TEM) showed a fibrous nanostructure with lengths and widths of approximately 100 and 20 nm, respectively, which grew to form advanced nanoarchitectures, including those resembling the weave patterns of Waraji (traditional Japanese straw sandals). This concept of combining block copolypeptide amphiphiles with discrete coordination compounds allows the design of flexible and functional supramolecular coordination systems in water.

  18. Optical Flow Applied to Time-Lapse Image Series to Estimate Glacier Motion in the Southern Patagonia Ice Field

    NASA Astrophysics Data System (ADS)

    Lannutti, E.; Lenzano, M. G.; Toth, C.; Lenzano, L.; Rivera, A.

    2016-06-01

    In this work, we assessed the feasibility of using optical flow to estimate the motion of a glacier. In general, previous approaches to detecting glacier changes involve solutions that require repeated observations, often based on extensive field work. Given that glaciers are usually located in geographically complex and hard-to-access areas, applying optical flow to deployed time-lapse imaging sensors may provide an efficient solution, at good spatial and temporal resolution, for describing mass motion. Several studies in the computer vision and image processing communities have used this method to detect large displacements. We therefore carried out a test of the proposed Large Displacement Optical Flow method at the Viedma Glacier, located in the South Patagonia Icefield, Argentina. We collected monoscopic terrestrial time-lapse imagery, acquired by a calibrated camera every 24 hours from April 2014 until April 2015. A filter based on temporal correlation and RGB color discretization between the images was applied to minimize errors related to changes in lighting, shadows, clouds, and snow. This selection allowed discarding images that do not follow a sequence of similarity. Our results show a flow field in the direction of the glacier movement, with acceleration at the terminus. We analyzed the errors between image pairs, and the matching generally appears to be adequate, although some areas show random gross errors related to changes in lighting. The proposed technique allowed the determination of glacier motion during one year, providing accurate and reliable motion data for subsequent analysis.

  19. Unsupervised background-constrained tank segmentation of infrared images in complex background based on the Otsu method.

    PubMed

    Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan

    2016-01-01

    In an effort to implement fast and effective tank segmentation from infrared images in complex background, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex background is proposed, based on the Otsu method, via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes of target region, middle background, and lower background via maximizing the sum of their between-class variances. Then, the unsupervised background constraint is implemented based on the within-class variance of the target region, and hence the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex background demonstrate that the proposed method achieves better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, implying that the new method performs well in real-time processing.
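    The between-class variance criterion at the core of the method can be sketched in a few lines; this is the classic two-class Otsu threshold on a gray-level histogram (the paper's three-class split maximizes a sum of between-class variances analogously):

```python
def otsu_threshold(histogram):
    """Return the gray level t that maximizes the between-class variance
    for the two classes {0..t} and {t+1..max}."""
    total = sum(histogram)
    grand_sum = sum(g * c for g, c in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t, count in enumerate(histogram[:-1]):
        w0 += count          # class-0 mass (unnormalized)
        cum += t * count     # class-0 gray-level sum
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (grand_sum - cum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal 4-level histogram: dark pixels at levels 0-1, bright at level 3
print(otsu_threshold([10, 8, 0, 12]))  # 1
```

    On a real infrared frame the histogram would be built from the simplified image's pixel intensities before this selection step.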

  20. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
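    The floating-point DCT that the ICT approximates is an orthonormal matrix transform; the sketch below builds the DCT-II matrix and recovers the input via the transpose (this is only the reference float transform, not the ICT's small-integer matrix):

```python
from math import cos, pi, sqrt

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C[k][m] = a_k*cos(pi*(2m+1)*k/(2n)),
    with a_0 = sqrt(1/n) and a_k = sqrt(2/n) for k > 0."""
    return [[(sqrt(1 / n) if k == 0 else sqrt(2 / n))
             * cos(pi * (2 * m + 1) * k / (2 * n)) for m in range(n)]
            for k in range(n)]

def transform(matrix, x):
    """Apply an n x n matrix to a length-n vector."""
    return [sum(row[m] * x[m] for m in range(len(x))) for row in matrix]

c = dct_matrix(8)
x = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
y = transform(c, x)                   # forward DCT coefficients
ct = [list(col) for col in zip(*c)]   # inverse = transpose (orthonormal)
print(transform(ct, y))               # recovers x up to rounding
```

    An integer variant replaces the cosine entries with small integers chosen to preserve near-orthogonality, trading a little rate-distortion performance for integer-only arithmetic.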

  1. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  2. A complex network approach for nanoparticle agglomeration analysis in nanoscale images

    NASA Astrophysics Data System (ADS)

    Machado, Bruno Brandoli; Scabini, Leonardo Felipe; Margarido Orue, Jonatan Patrick; de Arruda, Mauro Santos; Goncalves, Diogo Nunes; Goncalves, Wesley Nunes; Moreira, Raphaell; Rodrigues-Jr, Jose F.

    2017-02-01

    Complex networks have been widely used in science and technology because of their ability to represent several systems. One of these systems is found in Biochemistry, in which the synthesis of new nanoparticles is a hot topic. However, the interpretation of experimental results in the search for new nanoparticles poses several challenges. This is due to the characteristics of nanoparticle images and to their multiple intricate properties; one property of recurrent interest is the agglomeration of particles. Addressing this issue, this paper introduces an approach that uses complex networks to detect and describe nanoparticle agglomerates so as to foster easier and more insightful analyses. In this approach, each detected particle in an image corresponds to a vertex, and the distances between the particles define a criterion for creating edges: an edge is created if the distance is smaller than a radius of interest. Once this network is set, we calculate several discrete measures able to reveal the most outstanding agglomerates in a nanoparticle image. Experimental results using scanning tunneling microscopy (STM) images of gold nanoparticles demonstrated the effectiveness of the proposed approach over several samples, as reflected by the separability between particles in three usual settings. The results also demonstrated efficacy for both convex and non-convex agglomerates.
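    The construction described above, one vertex per detected particle and an edge whenever two particles lie closer than the radius of interest, can be sketched directly; agglomerates then emerge as connected components. Coordinates and radius below are hypothetical:

```python
from math import dist

def agglomerates(points, radius):
    """Group particle centers into agglomerates: particles closer than
    `radius` are linked, and connected components are returned."""
    n = len(points)
    adjacency = {i: [j for j in range(n)
                     if j != i and dist(points[i], points[j]) < radius]
                 for i in range(n)}
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, component = [start], []
        while stack:  # depth-first traversal of one component
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(adjacency[node])
        components.append(sorted(component))
    return components

# Two well-separated clusters of particle centers
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(agglomerates(points, 2.0))  # [[0, 1, 2], [3, 4]]
```

    Measures such as component size or vertex degree computed on this graph are the kind of discrete descriptors the paper uses to rank agglomerates.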

  3. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of R^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and the number of cells per projection grow, indicating that fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.

  4. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  5. Image Processing for Cameras with Fiber Bundle Image Relay

    DTIC Science & Technology

    length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems...coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image...vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with

  6. Stability analysis of the Euler discretization for SIR epidemic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanto, Agus

    2014-06-19

    In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For that discrete model, the existence of a disease-free equilibrium and an endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free and the endemic equilibrium are also derived. It is found that the local asymptotic stability of the existing equilibria is achieved only for a small time step size h. If h is increased further and passes a critical value, then both equilibria lose their stability. Our numerical simulations show that complex dynamical behavior such as bifurcation or chaos appears for relatively large h. Both analytical and numerical results show that the discrete SIR model has richer dynamical behavior than its continuous counterpart.
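The stability behavior described above is easy to reproduce numerically. The sketch below applies the forward Euler method to the standard SIR right-hand sides; the abstract does not give the paper's exact parameterization, so the values of beta, gamma, and h here are illustrative only:

```python
# Forward-Euler discretization of the standard SIR model (illustrative
# parameter values; the paper's exact setup is not given in the abstract).
def euler_sir(beta, gamma, s0, i0, r0, h, steps):
    """Iterate S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + h * ds, i + h * di, r + h * dr
        history.append((s, i, r))
    return history

# For a small step size h the trajectory behaves like the continuous model.
traj = euler_sir(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, r0=0.0, h=0.1, steps=500)
s, i, r = traj[-1]
print(abs(s + i + r - 1.0) < 1e-9)   # S+I+R is conserved by every Euler step
```

Because the three right-hand sides sum to zero, S + I + R is conserved at every Euler step regardless of h; the instability the paper analyzes shows up instead as oscillating or negative compartment values once h passes the critical threshold.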

  7. Blind technique using blocking artifacts and entropy of histograms for image tampering detection

    NASA Astrophysics Data System (ADS)

    Manu, V. T.; Mehtre, B. M.

    2017-06-01

    The tremendous technological advancements of recent times have enabled people to create, edit, and circulate images more easily than ever before. As a result, ensuring the integrity and authenticity of images has become challenging. Malicious editing of images to deceive the viewer is referred to as image tampering. A widely used image tampering technique is image splicing or compositing, in which regions from different images are copied and pasted. In this paper, we propose a tamper detection method utilizing the blocking and blur artifacts that are the footprints of splicing. Images are classified as tampered or not based on the standard deviations of the entropy histograms and block discrete cosine transforms. If an image is classified as tampered, we can detect the exact boundaries of the tampered area. Experimental results on publicly available image tampering datasets show that the proposed method outperforms existing methods in terms of accuracy.
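As an illustration of one ingredient of such a detector, the snippet below computes the Shannon entropy of a block's intensity histogram in pure Python. The block contents and sizes are invented for the example; the method's DCT features and classification step are not shown:

```python
import math

def block_entropy(block):
    """Shannon entropy (bits) of the intensity histogram of one image block.
    `block` is a 2-D list of integer gray levels in [0, 255]."""
    hist = [0] * 256
    n = 0
    for row in block:
        for px in row:
            hist[px] += 1
            n += 1
    entropy = 0.0
    for count in hist:
        if count:
            p = count / n
            entropy -= p * math.log2(p)
    return entropy

flat = [[128] * 8 for _ in range(8)]                               # one gray level
noisy = [[(i * 8 + j) % 256 for j in range(8)] for i in range(8)]  # 64 distinct levels
print(block_entropy(flat))    # 0.0
print(block_entropy(noisy))   # 6.0 (uniform over 64 levels)
```

A tampered splice boundary tends to disturb the statistics of such per-block entropies, which is what the standard-deviation features in the paper pick up.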

  8. SToRM: A numerical model for environmental surface flows

    USGS Publications Warehouse

    Simoes, Francisco J.

    2009-01-01

    SToRM (System for Transport and River Modeling) is a numerical model developed to simulate free surface flows in complex environmental domains. It is based on the depth-averaged St. Venant equations, which are discretized using unstructured upwind finite volume methods, and contains both steady and unsteady solution techniques. This article provides a brief description of the numerical approach selected to discretize the governing equations in space and time, including important aspects of solving natural environmental flows, such as the wetting and drying algorithm. The presentation is illustrated with several application examples, covering both laboratory and natural river flow cases, which show the model’s ability to solve complex flow phenomena.

  9. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets

    PubMed Central

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-01-01

    Abstract. Liver segmentation remains a major challenge, largely due to its intense complexity with surrounding anatomical structures (stomach, kidney, and heart), high noise levels, and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low-contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as w, Δt, z, α, μ, α1, and α2, play important roles, so their values are selected carefully. Both qualitative and quantitative evaluations performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method. PMID:26158101

  10. Multiscale-Driven approach to detecting change in Synthetic Aperture Radar (SAR) imagery

    NASA Astrophysics Data System (ADS)

    Gens, R.; Hogenson, K.; Ajadi, O. A.; Meyer, F. J.; Myers, A.; Logan, T. A.; Arnoult, K., Jr.

    2017-12-01

    Detecting changes between Synthetic Aperture Radar (SAR) images can be a useful but challenging exercise. SAR with its all-weather capabilities can be an important resource in identifying and estimating the expanse of events such as flooding, river ice breakup, earthquake damage, oil spills, and forest growth, as it can overcome shortcomings of optical methods related to cloud cover. However, detecting change in SAR imagery can be impeded by many factors including speckle, complex scattering responses, low temporal sampling, and difficulty delineating boundaries. In this presentation we use a change detection method based on a multiscale-driven approach. By using information at different resolution levels, we attempt to obtain more accurate change detection maps in both heterogeneous and homogeneous regions. Integrated within the processing flow are processes that 1) improve classification performance by combining Expectation-Maximization algorithms with mathematical morphology, 2) achieve high accuracy in preserving boundaries using measurement level fusion techniques, and 3) combine modern non-local filtering and 2D-discrete stationary wavelet transform to provide robustness against noise. This multiscale-driven approach to change detection has recently been incorporated into the Alaska Satellite Facility (ASF) Hybrid Pluggable Processing Pipeline (HyP3) using radiometrically terrain corrected SAR images. Examples primarily from natural hazards are presented to illustrate the capabilities and limitations of the change detection method.

  11. Quantitative Multispectral Analysis Of Discrete Subcellular Particles By Digital Imaging Fluorescence Microscopy (DIFM)

    NASA Astrophysics Data System (ADS)

    Dorey, C. K.; Ebenstein, David B.

    1988-10-01

    Subcellular localization of multiple biochemical markers is readily achieved through their characteristic autofluorescence or through use of appropriately labelled antibodies. Recent development of specific probes has permitted elegant studies of calcium and pH in living cells. However, each of these methods measures fluorescence at one wavelength; precise quantitation of multiple fluorophores at individual sites within a cell has not been possible. Using DIFM, we have achieved spectral analysis of discrete subcellular particles 1-2 µm in diameter. The fluorescence emission is broken into narrow bands by an interference monochromator and visualized through the combined use of a silicon intensified target (SIT) camera, a microcomputer-based framegrabber with 8-bit resolution, and a color video monitor. Image acquisition, processing, analysis, and display are under software control. The digitized image can be corrected for the spectral distortions induced by the wavelength-dependent sensitivity of the camera, and the displayed image can be enhanced or presented in pseudocolor to facilitate discrimination of variations in pixel intensity of individual particles. For rapid comparison of the fluorophore composition of granules, a ratio image is produced by dividing the image captured at one wavelength by that captured at another. In the resultant ratio image, a granule whose fluorophore composition differs from the majority is selectively colored. This powerful system has been utilized to obtain spectra of endogenous autofluorescent compounds in discrete cellular organelles of human retinal pigment epithelium, and to measure immunohistochemically labelled components of the extracellular matrix associated with the human optic nerve.

  12. Multichannel High Resolution Wide Swath SAR Imaging for Hypersonic Air Vehicle with Curved Trajectory.

    PubMed

    Zhou, Rui; Sun, Jinping; Hu, Yuxin; Qi, Yaolong

    2018-01-31

    Synthetic aperture radar (SAR) equipped on the hypersonic air vehicle in near space has many advantages over the conventional airborne SAR. However, its high-speed maneuvering characteristics with curved trajectory result in serious range migration, and exacerbate the contradiction between the high resolution and wide swath. To solve this problem, this paper establishes the imaging geometrical model matched with the flight trajectory of the hypersonic platform and the multichannel azimuth sampling model based on the displaced phase center antenna (DPCA) technology. Furthermore, based on the multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using discrete Fourier transform is proposed to obtain the azimuth uniform sampling data. Due to the high complexity of the slant range model, it is difficult to deduce the processing algorithm for SAR imaging. Thus, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximate coefficients of cosine function are obtained using the two-population coevolutionary algorithm. On this basis, aiming at the problem that the traditional Omega-K algorithm cannot compensate the residual phase with the difficulty of Stolt mapping along the range frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging, and presents a method of range division to achieve wide swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm.

  14. Emerging Techniques in Stratified Designs and Continuous Gradients for Tissue Engineering of Interfaces

    PubMed Central

    Dormer, Nathan H.; Berkland, Cory J.; Detamore, Michael S.

    2013-01-01

    Interfacial tissue engineering is an emerging branch of regenerative medicine, where engineers are faced with developing methods for the repair of one or many functional tissue systems simultaneously. Early and recent solutions for complex tissue formation have utilized stratified designs, where scaffold formulations are segregated into two or more layers, with discrete changes in physical or chemical properties, mimicking a corresponding number of interfacing tissue types. This method has brought forth promising results, along with a myriad of regenerative techniques. The latest designs, however, are employing “continuous gradients” in properties, where there is no discrete segregation between scaffold layers. This review compares the methods and applications of recent stratified approaches to emerging continuously graded methods. PMID:20411333

  15. Image secure transmission for optical orthogonal frequency-division multiplexing visible light communication systems using chaotic discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Zhang, Shaozhong; Chen, Fangni; Wu, Ming-Wei; Qiu, Weiwei

    2017-11-01

    A physical encryption scheme for orthogonal frequency-division multiplexing (OFDM) visible light communication (VLC) systems using chaotic discrete cosine transform (DCT) is proposed. In the scheme, the row of the DCT matrix is permutated by a scrambling sequence generated by a three-dimensional (3-D) Arnold chaos map. Furthermore, two scrambling sequences, which are also generated from a 3-D Arnold map, are employed to encrypt the real and imaginary parts of the transmitted OFDM signal before the chaotic DCT operation. The proposed scheme enhances the physical layer security and improves the bit error rate (BER) performance for OFDM-based VLC. The simulation results prove the efficiency of the proposed encryption method. The experimental results show that the proposed security scheme not only protects image data from eavesdroppers but also keeps the good BER and peak-to-average power ratio performances for image-based OFDM-VLC systems.
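A sketch of the scrambling idea: derive a permutation from a chaotic orbit and use it to permute the rows of a transform matrix. Note this uses a logistic map as a stand-in chaotic generator; the paper itself uses a three-dimensional Arnold map, whose exact form is not reproduced here, and the 8x8 matrix is hypothetical:

```python
def chaotic_permutation(n, x0=0.3, r=3.99, burn_in=100):
    """Permutation of range(n) obtained by argsorting samples of a
    logistic-map orbit (a stand-in for the paper's 3-D Arnold map)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    samples = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        samples.append(x)
    return sorted(range(n), key=lambda k: samples[k])

perm = chaotic_permutation(8)
print(sorted(perm) == list(range(8)))   # a valid permutation of the rows

# Row-permuted version of a hypothetical 8x8 transform matrix
identity = [[float(u == v) for v in range(8)] for u in range(8)]
scrambled = [identity[p] for p in perm]
```

The receiver regenerates the same permutation from the shared key parameters (here x0 and r) and inverts it, which is what makes the chaotic map usable as a physical-layer cipher.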

  16. Mapping Hydrothermal Alteration Zones at a Sediment-Hosted Gold Deposit - Goldstrike Mining District, Utah, Using Ground-Based Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Krupnik, D.; Khan, S.; Crockett, M.

    2017-12-01

    Understanding the origin, genesis, and depositional and structural mechanisms of gold mineralization, as well as detailed mapping of gold-bearing mineral phases at centimeter scale, can be useful for exploration. This work was conducted in the Goldstrike mining district near St. George, UT, a structurally complex region that contains Carlin-style disseminated gold deposits in permeable sedimentary layers near high-angle fault zones. These fault zones are a likely conduit for gold-bearing hydrothermal fluids, are silicified, and are frequently gold-bearing. Alteration patterns are complex, difficult to distinguish visually, composed of several phases, and vary significantly over centimeter- to meter-scale distances. This makes identifying and quantifying the extent of the target zones costly, time consuming, and discontinuous with traditional geochemical methods. A ground-based hyperspectral scanning system with sensors collecting data in the Visible Near Infrared (VNIR) and Short-Wave Infrared (SWIR) portions of the electromagnetic spectrum is utilized for close-range outcrop scanning. Scans were taken of vertical exposures of both gold-bearing and barren silicified rocks (jasperoids), with the intent to produce images that delineate and quantify the extent of each phase of alteration, in combination with discrete geochemical data. This ongoing study produces mineralogical maps of surface minerals at centimeter scale, with the intent of mapping original and alteration minerals. This efficient method of outcrop characterization increases our understanding of fluid flow and alteration of economic deposits.

  17. The Roadmaker's algorithm for the discrete pulse transform.

    PubMed

    Laurie, Dirk P

    2011-02-01

    The discrete pulse transform (DPT) is a decomposition of an observed signal into a sum of pulses, i.e., signals that are constant on a connected set and zero elsewhere. Originally developed for 1-D signal processing, the DPT has recently been generalized to more dimensions. Applications in image processing are currently being investigated. The time required to compute the DPT as originally defined via the successive application of LULU operators (members of a class of minimax filters studied by Rohwer) has been a severe drawback to its applicability. This paper introduces a fast method for obtaining such a decomposition, called the Roadmaker's algorithm because it involves filling pits and razing bumps. It acts selectively only on those features actually present in the signal, flattening them in order of increasing size by subtracting an appropriate positive or negative pulse, which is then appended to the decomposition. The implementation described here covers 1-D signal processing as well as 2-D and 3-D image processing in a single framework. This is achieved by considering the signal or image as a function defined on a graph, with the geometry specified by the edges of the graph. Whenever a feature is flattened, nodes in the graph are merged, until eventually only one node remains. At that stage, a new set of edges on the same nodes, forming a tree structure, defines the obtained decomposition. The Roadmaker's algorithm is shown to be equivalent to the DPT in the sense of obtaining the same decomposition. However, its simpler operators are not in general equivalent to the LULU operators in situations where those operators are not applied successively. A by-product of the Roadmaker's algorithm is that it yields a proof of the so-called Highlight Conjecture, stated as an open problem in 2006.
We pay particular attention to algorithmic details and complexity, including a demonstration that in the 1-D case, and also in the case of a complete graph, the Roadmaker's algorithm has optimal complexity: it runs in time O(m), where m is the number of arcs in the graph.
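To make the pit-filling/bump-razing idea concrete, here is a naive 1-D discrete pulse decomposition. It re-scans the signal on every pass, so it does not achieve the paper's O(m) Roadmaker complexity; it only illustrates that flattening extremal runs in order of increasing size yields pulses that sum back to the signal:

```python
def runs(sig):
    """Maximal constant runs of `sig` as (start, end_exclusive, value)."""
    out, s = [], 0
    for k in range(1, len(sig) + 1):
        if k == len(sig) or sig[k] != sig[s]:
            out.append((s, k, sig[s]))
            s = k
    return out

def discrete_pulse_transform(sig):
    """Repeatedly flatten the smallest extremal run (raze bumps, fill pits).
    Returns pulses as (start, end_exclusive, height) plus the base level."""
    sig = list(sig)
    pulses = []
    while True:
        rs = runs(sig)
        if len(rs) == 1:
            break
        best = None
        for idx, (a, b, v) in enumerate(rs):
            nbrs = [rs[j][2] for j in (idx - 1, idx + 1) if 0 <= j < len(rs)]
            # a run is extremal if it lies strictly above (bump) or
            # strictly below (pit) all of its neighbouring runs
            if all(v > w for w in nbrs) or all(v < w for w in nbrs):
                if best is None or (b - a) < (best[1] - best[0]):
                    best = (a, b, v, nbrs)
        a, b, v, nbrs = best
        target = max(nbrs) if v > nbrs[0] else min(nbrs)
        pulses.append((a, b, v - target))     # the pulse just removed
        for k in range(a, b):
            sig[k] = target                   # flatten to the nearest level
    return pulses, sig[0]

sig = [0, 4, 4, 1, 3, 0]
pulses, base = discrete_pulse_transform(sig)
recon = [base] * len(sig)
for a, b, h in pulses:
    for k in range(a, b):
        recon[k] += h
print(recon == sig)   # the pulses plus the base level reconstruct the signal
```

Each flatten merges the run with at least one neighbour, so the loop always terminates; the recorded pulses are exactly the constant-on-a-connected-set components the abstract describes.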

  18. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast, together with error pooling, resulting in minimum perceptual error for any given bit rate, or minimum bit rate for any given perceptual error.
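The quantization step itself is simple to illustrate. The sketch below uses a naive 2-D DCT-II and a uniform quantization matrix; the patent's visually weighted, image-adapted matrices would replace the uniform Q shown here, which is a made-up placeholder:

```python
import math

def dct2(block):
    """Naive 2-D orthonormal DCT-II of an 8x8 block (O(N^4); fine for demos)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, Q):
    """Divide each DCT coefficient by its matrix entry and round; larger
    entries of Q discard more (ideally less visible) detail."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, Q)]

# A flat mid-gray block: all energy lands in the DC coefficient.
block = [[128] * 8 for _ in range(8)]
Q = [[16] * 8 for _ in range(8)]      # hypothetical uniform quantization matrix
qc = quantize(dct2(block), Q)
print(qc[0][0])   # 64  (DC = 8*128 = 1024, divided by 16)
```

In the patented scheme, the entries of Q would be tuned per image from the masking and error-pooling model rather than held uniform as above.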

  19. A Method to Prevent Protein Delocalization in Imaging Mass Spectrometry of Non-Adherent Tissues: Application to Small Vertebrate Lens Imaging

    PubMed Central

    Anderson, David M. G.; Floyd, Kyle A.; Barnes, Stephen; Clark, Judy M.; Clark, John I.; Mchaourab, Hassane; Schey, Kevin L.

    2015-01-01

    MALDI imaging requires careful sample preparation to obtain reliable, high quality images of small molecules, peptides, lipids, and proteins across tissue sections. Poor crystal formation, delocalization of analytes, and inadequate tissue adherence can affect the quality, reliability, and spatial resolution of MALDI images. We report a comparison of tissue mounting and washing methods that resulted in an optimized method using conductive carbon substrates that avoids thaw mounting or washing steps, minimizes protein delocalization, and prevents tissue detachment from the target surface. Application of this method to image ocular lens proteins of small vertebrate eyes demonstrates the improved methodology for imaging abundant crystallin protein products. This method was demonstrated for tissue sections from rat, mouse, and zebrafish lenses resulting in good quality MALDI images with little to no delocalization. The images indicate, for the first time in mouse and zebrafish, discrete localization of crystallin protein degradation products resulting in concentric rings of distinct protein contents that may be responsible for the refractive index gradient of vertebrate lenses. PMID:25665708

  20. Direct-to-digital holography reduction of reference hologram noise and fourier space smearing

    DOEpatents

    Voelkl, Edgar

    2006-06-27

    Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
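A toy numerical sketch of the two ideas, using Python complex numbers: average several noisy reference complex image waves into a reduced-noise reference, then divide an object wave by it. The wave values, pixel count, and noise level are invented for the example:

```python
import cmath
import random

random.seed(7)

def average_wave(waves):
    """Average several reference complex image waves pixel-by-pixel to
    suppress uncorrelated recording noise."""
    n = len(waves)
    return [sum(px) / n for px in zip(*waves)]

# A hypothetical 4-pixel "true" reference wave plus 50 noisy recordings of it.
true_ref = [cmath.exp(1j * 0.3 * k) for k in range(4)]
recordings = [[p + complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
               for p in true_ref] for _ in range(50)]
ref = average_wave(recordings)        # reduced-noise reference image wave

obj = [1.5 * p for p in true_ref]     # object wave sharing the reference phase
corrected = [o / r for o, r in zip(obj, ref)]
# After division the reference phase cancels; magnitudes are close to 1.5
print(all(abs(abs(c) - 1.5) < 0.1 for c in corrected))
```

Averaging shrinks the noise roughly by the square root of the number of reference holograms, which is why the divided object wave above recovers a nearly flat magnitude.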

  1. Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    [37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph...continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8], present rigorous convergence
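The graph Laplacian mentioned in the snippet has a simple combinatorial form, L = D - W (degree matrix minus weight matrix); a minimal sketch for a small weighted graph, with the 3-node path graph invented for illustration:

```python
def graph_laplacian(weights):
    """Combinatorial graph Laplacian L = D - W for a symmetric weight
    matrix `weights` with zero diagonal."""
    n = len(weights)
    return [[(sum(weights[i]) if i == j else 0) - weights[i][j]
             for j in range(n)] for i in range(n)]

# 3-node path graph with unit edge weights
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
L = graph_laplacian(W)
print(L)   # [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
# Rows sum to zero: constant vectors lie in the null space of L,
# which is what makes L the discrete analogue of the (negative) Laplacian.
```

Diffuse-interface methods on graphs, as in the record above, replace the continuum Laplacian in the Ginzburg-Landau functional with exactly this matrix (or a normalized variant).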

  2. Overset meshing coupled with hybridizable discontinuous Galerkin finite elements

    DOE PAGES

    Kauffman, Justin A.; Sheldon, Jason P.; Miller, Scott T.

    2017-03-01

    We introduce the use of hybridizable discontinuous Galerkin (HDG) finite element methods on overlapping (overset) meshes. Overset mesh methods are advantageous for solving problems on complex geometrical domains. We also combine the geometric flexibility of overset methods with the advantages of HDG methods: arbitrarily high-order accuracy, reduced size of the global discrete problem, and the ability to solve elliptic, parabolic, and/or hyperbolic problems with a unified form of discretization. Our approach to developing the 'overset HDG' method is to couple the global solution from one mesh to the local solution on the overset mesh. We present numerical examples for steady convection–diffusion and static elasticity problems. The examples demonstrate optimal order convergence in all primal fields for an arbitrary amount of overlap of the underlying meshes.

  3. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  4. Multidimensional, mapping-based complex wavelet transforms.

    PubMed

    Fernandes, Felix C A; van Spaendonck, Rutger L C; Burrus, C Sidney

    2005-01-01

    Although the discrete wavelet transform (DWT) is a powerful tool for signal and image processing, it has three serious disadvantages: shift sensitivity, poor directionality, and lack of phase information. To overcome these disadvantages, we introduce multidimensional, mapping-based, complex wavelet transforms that consist of a mapping onto a complex function space followed by a DWT of the complex mapping. Unlike other popular transforms that also mitigate DWT shortcomings, the decoupled implementation of our transforms has two important advantages. First, the controllable redundancy of the mapping stage offers a balance between degree of shift sensitivity and transform redundancy. This allows us to create a directional, nonredundant, complex wavelet transform with potential benefits for image coding systems. To the best of our knowledge, no other complex wavelet transform is simultaneously directional and nonredundant. The second advantage of our approach is the flexibility to use any DWT in the transform implementation. As an example, we exploit this flexibility to create the complex double-density DWT: a shift-insensitive, directional, complex wavelet transform with a low redundancy of (3M - 1)/(2M - 1) in M dimensions. No other transform achieves all these properties at a lower redundancy, to the best of our knowledge. By exploiting the advantages of our multidimensional, mapping-based complex wavelet transforms in seismic signal-processing applications, we have demonstrated state-of-the-art results.

  5. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
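The identifying quantity is easy to state: the number of leading all-zero magnitude bit-planes of each code-block. In a real codestream this count is parsed from packet headers without EBCOT decoding; the sketch below instead derives it directly from invented coefficient values, with a hypothetical fixed bit-plane budget:

```python
def zero_bitplanes(coeffs, total_planes=16):
    """Number of leading all-zero magnitude bit-planes of a code-block:
    the total plane count minus the bit-length of the largest magnitude.
    `total_planes` is a made-up budget for this illustration."""
    m = max(abs(c) for c in coeffs)
    return total_planes - m.bit_length()

block_a = [3, -1, 2, 0]    # max magnitude 3  -> 2 significant planes
block_b = [40, -12, 7, 5]  # max magnitude 40 -> 6 significant planes
print(zero_bitplanes(block_a))   # 14
print(zero_bitplanes(block_b))   # 10

# Identification compares the per-code-block counts as a compact signature.
signature = [zero_bitplanes(b) for b in (block_a, block_b)]
```

Because this signature survives re-encoding with different code-block sizes, rates, and DWT levels (per the abstract), comparing such vectors avoids false-negative matches cheaply.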

  6. Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.

    2010-01-01

    NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next-generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed and used to reuse models and model elements in other, less-detailed models.
The DES team continues to innovate and expand DES capabilities to address KSC's planning needs.
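The core of any DES engine is a time-ordered event queue. A minimal sketch with Python's heapq, using an invented three-step processing chain (the actual KSC models, resources, and task durations are far richer than this):

```python
import heapq

def simulate(initial_events, horizon):
    """Minimal discrete-event loop: the system state changes only at
    discrete event times; each handler may schedule follow-on events."""
    heap = []
    counter = 0                   # tie-breaker for simultaneous events
    for t, name, handler in initial_events:
        heapq.heappush(heap, (t, counter, name, handler))
        counter += 1
    log = []
    while heap:
        t, _, name, handler = heapq.heappop(heap)
        if t > horizon:
            break
        log.append((t, name))
        for delay, next_name, next_handler in handler(t):
            heapq.heappush(heap, (t + delay, counter, next_name, next_handler))
            counter += 1
    return log

# Hypothetical launch-processing chain: stack -> roll-out -> launch,
# with made-up task durations (in days).
def stack(t):    return [(5.0, "roll_out", roll_out)]
def roll_out(t): return [(2.0, "launch", launch)]
def launch(t):   return []

print(simulate([(0.0, "stack", stack)], horizon=100))
# [(0.0, 'stack'), (5.0, 'roll_out'), (7.0, 'launch')]
```

Probabilistic event variation of the kind the abstract mentions would replace the fixed delays with draws from task-duration distributions (e.g. Delphi-estimated triangular distributions).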

  7. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows is presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver is demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. 
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh (h-) refinement and order (p-) enrichment is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the RANS and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid-scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that lay the foundation for a three-dimensional high-order flow solution strategy that can serve as the basis for future LES simulations.

  8. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least squares method in recovering the model parameters with far fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS has resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
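    The l1-regularized least squares problem at the heart of the CS reconstruction can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch in pure Python. This is illustrative only: the record solves the equivalent quadratic program with a primal-dual interior point method, and the matrix and parameters below are invented for demonstration.

```python
# Minimal ISTA (iterative soft-thresholding) sketch for
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# Not the record's solver: the paper uses a primal-dual interior point
# method; A, b, lam and the step size here are invented for illustration.

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def rmatvec(A, r):
    # A^T r for a dense row-major matrix.
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft_threshold(v, t):
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, b, lam, step, iters=2000):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]
        grad = rmatvec(A, r)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)
    return x

# Underdetermined 2x3 system whose sparsest explanation is x = (3, 0, 0).
A = [[1.0, 0.2, 0.1],
     [0.1, 1.0, 0.3]]
b = [3.0, 0.3]
x = ista(A, b, lam=0.05, step=0.4)
```

    The l1 penalty drives the spurious coordinates to exactly zero, mimicking how a sparse model update suppresses the smeared interfaces of smoothness-constrained inversion.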

  9. Correction of aeroheating-induced intensity nonuniformity in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu

    2016-05-01

    Aeroheating-induced intensity nonuniformity severely degrades the effective performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic form of the TV discretization, and an alternating minimization algorithm is adopted to solve the optimization model. The experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.

  10. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR onto distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.

  11. Discrete Ramanujan transform for distinguishing the protein coding regions from other regions.

    PubMed

    Hua, Wei; Wang, Jiasong; Zhao, Jian

    2014-01-01

    Based on the study of the Ramanujan sum and Ramanujan coefficient, this paper introduces the concepts of the discrete Ramanujan transform and spectrum. Using the Voss numerical representation, a symbolic DNA strand is mapped into a numerical DNA sequence, from which the discrete Ramanujan spectrum is deduced. It is well known that the discrete Fourier power spectrum of a protein coding sequence has an important feature of 3-base periodicity, which is widely used for DNA sequence analysis by the technique of the discrete Fourier transform. The analysis is performed by testing the signal-to-noise ratio at frequency N/3 as a criterion, where N is the length of the sequence. The results presented in this paper show that the property of 3-base periodicity can be identified as a prominent spike of the discrete Ramanujan spectrum at period 3 only for the protein coding regions. A signal-to-noise ratio for the discrete Ramanujan spectrum is defined for numerical measurement. Therefore, the discrete Ramanujan spectrum and the signal-to-noise ratio of a DNA sequence can be used for distinguishing the protein coding regions from the noncoding regions. All the exon and intron sequences in whole chromosomes 1, 2, 3 and 4 of Caenorhabditis elegans have been tested, and the histograms and tables from the computational results illustrate the reliability of our method. In addition, we show theoretically that the algorithm for calculating the discrete Ramanujan spectrum has lower computational complexity and higher computational accuracy. The computational experiments show that classifying DNA sequences by the discrete Ramanujan spectrum is a fast and effective technique. Copyright © 2014 Elsevier Ltd. All rights reserved.
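    The DFT-based 3-base periodicity test that the record uses as its baseline can be sketched directly: map the strand to four Voss indicator sequences and compare the total power at frequency N/3 against the average power over the other frequencies. A minimal pure-Python sketch, assuming N divisible by 3; the exact SNR normalization varies in the literature, so this one is an illustrative choice:

```python
import cmath

def voss(seq):
    # Voss representation: one binary indicator sequence per base.
    return [[1.0 if s == b else 0.0 for s in seq] for b in "ACGT"]

def power_at(x, k):
    # Power of the DFT of x at frequency index k.
    N = len(x)
    X = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    return abs(X) ** 2

def snr_at_third(seq):
    # Total indicator power at k = N/3 divided by the average power over
    # all non-zero frequencies (one common normalization choice).
    N = len(seq)
    chans = voss(seq)
    avg = sum(power_at(c, k) for c in chans for k in range(1, N)) / (N - 1)
    return sum(power_at(c, N // 3) for c in chans) / avg
```

    For an artificial strand with exact period 3, the entire indicator power concentrates at k = N/3 and its harmonic, so the ratio is large; for aperiodic strands it hovers near 1.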

  12. Recognition of pornographic web pages by classifying texts and images.

    PubMed

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, contour-based object features are extracted to recognize pornographic images. In the text and image fusion algorithm, Bayes' theorem is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those of either individual classifier, and our framework can be adapted to different categories of Web pages.
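    The naive Bayes step of the discrete text classifier, estimating the probability that a bag of discrete terms belongs to a class, can be sketched as a standard multinomial naive Bayes with add-one smoothing. The record does not specify its exact variant, and the toy labels and corpus below are invented for illustration:

```python
import math
from collections import Counter

def train(docs):
    # docs: list of (word_list, label) pairs. Multinomial naive Bayes with
    # add-one (Laplace) smoothing: a textbook form, not necessarily the
    # paper's exact variant.
    label_counts = Counter(label for _, label in docs)
    vocab = {w for words, _ in docs for w in words}
    word_counts = {label: Counter() for label in label_counts}
    for words, label in docs:
        word_counts[label].update(words)
    model = {}
    for label, n_docs in label_counts.items():
        total = sum(word_counts[label].values())
        loglik = {w: math.log((word_counts[label][w] + 1) / (total + len(vocab)))
                  for w in vocab}
        unseen = math.log(1.0 / (total + len(vocab)))
        model[label] = (math.log(n_docs / len(docs)), loglik, unseen)
    return model

def classify(model, words):
    # Return the label with the highest posterior log-score.
    def score(label):
        prior, loglik, unseen = model[label]
        return prior + sum(loglik.get(w, unseen) for w in words)
    return max(model, key=score)

# Invented toy corpus with neutral labels.
docs = [(["free", "pills", "buy"], "suspect"),
        (["buy", "now", "free"], "suspect"),
        (["meeting", "notes", "agenda"], "benign"),
        (["project", "meeting", "plan"], "benign")]
model = train(docs)
```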

  13. Geometry Of Discrete Sets With Applications To Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Sinha, Divyendu

    1990-03-01

    In this paper we present a new framework for discrete black-and-white images that employs only integer arithmetic. This framework is shown to retain the essential characteristics of the framework for Euclidean images. We propose two norms and, based on them, define the permissible geometric operations on images. The basic invariants of our geometry are line images, the structure of an image, and the corresponding local property of strong attachment of pixels. The permissible operations also preserve 3x3 neighborhoods, area, and perpendicularity. The structure, patterns, and inter-pattern gaps in a discrete image are shown to be conserved by the magnification and contraction process. Our notions of approximate congruence, similarity, and symmetry are similar in character to the corresponding notions for Euclidean images [1]. We mention two discrete pattern recognition algorithms that work purely with integers and fit into our framework. Their performance has been shown to be on par with that of traditional geometric schemes. Also, all the undesired effects of finite-length registers in fixed-point arithmetic that plague traditional algorithms are non-existent in this family of algorithms.

  14. Biomolecular Assembly of Gold Nanocrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micheel, Christine Marya

    2005-05-20

    Over the past ten years, methods have been developed to construct discrete nanostructures using nanocrystals and biomolecules. While these frequently consist of gold nanocrystals and DNA, semiconductor nanocrystals as well as antibodies and enzymes have also been used. One example of discrete nanostructures is dimers of gold nanocrystals linked together with complementary DNA. This type of nanostructure is also known as a nanocrystal molecule. Discrete nanostructures of this kind have a number of potential applications, from highly parallel self-assembly of electronic components and rapid read-out of DNA computations to biological imaging and a variety of bioassays. My research focused on three main areas. The first area, the refinement of electrophoresis as a purification and characterization method, included application of agarose gel electrophoresis to the purification of discrete gold nanocrystal/DNA conjugates and nanocrystal molecules, as well as development of a more detailed understanding of the hydrodynamic behavior of these materials in gels. The second area, the development of methods for quantitative analysis of transmission electron microscope data, used computer programs written to find pair correlations as well as higher-order correlations. With these programs, it is possible to reliably locate and measure nanocrystal molecules in TEM images. The final area of research explored the use of DNA ligase in the formation of nanocrystal molecules. The synthesis of gold-particle dimers linked by a single strand of DNA, made possible by DNA ligase, opens the possibility of amplifying nanostructures in a manner similar to the polymerase chain reaction. These three areas are discussed in the context of the work in the Alivisatos group, as well as the field as a whole.

  15. A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.

    PubMed

    Mohades Deylami, Ali; Mohammadzadeh Asl, Babak

    2017-06-01

    The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed, in which medical ultrasound signals can be represented with fewer dimensions than in the standard BS. This is because of the symmetric beampatterns of the beams in the proposed BS, compared with the asymmetric ones in the standard BS. This lets us reduce the data dimensions to two, so a highly complex algorithm, such as the MVB, can be applied faster in this BS. The results indicate that, by keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB) with only 0.44% of the needed flops. This beamformer is also more robust against sound speed estimation errors than the SMVB.

  16. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations. Part 1; Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2009-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Results from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme.
For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or by modifying the scheme stencil to reflect the direction of strong coupling.

  17. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error-pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
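    The DCT-plus-quantization-matrix pipeline the patent builds on can be sketched in one dimension: transform, divide each coefficient by its matrix entry and round, then invert. The quantization vector below is a hypothetical stand-in with the qualitative shape of a perceptual matrix (coarser at high frequencies), not the invention's optimized matrix:

```python
import math

N = 8

def dct_1d(v):
    # Orthonormal 8-point DCT-II.
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def idct_1d(coeffs):
    # Exact inverse of dct_1d (DCT-III with matching scaling).
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += c * coeffs[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        out.append(s)
    return out

def quantize(coeffs, Q):
    return [round(c / q) for c, q in zip(coeffs, Q)]

def dequantize(indices, Q):
    return [i * q for i, q in zip(indices, Q)]

# Hypothetical quantization vector: larger entries (coarser steps) at the
# less visible high frequencies.
Q = [1, 2, 3, 4, 6, 8, 10, 12]
v = [52, 55, 61, 66, 70, 61, 64, 73]
rec = idct_1d(dequantize(quantize(dct_1d(v), Q), Q))
```

    Each coefficient error is at most half its quantizer step, so the reconstruction error is bounded by the matrix entries: the larger the entry, the fewer bits spent and the more error tolerated at that frequency.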

  18. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
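    The multilevel Monte Carlo estimator itself is compact: write E[P_L] as the telescoping sum E[P_0] + Σ E[P_l − P_{l−1}], sample each correction with coupled fine/coarse evaluations of the same random input, and spend most samples on the cheap coarse levels. A toy pure-Python sketch in which a grid-rounded integrand stands in for a pore-scale solver; the sample counts are arbitrary:

```python
import random

def P(level, u):
    # Level-l approximation of the quantity of interest u**2: u is rounded
    # down to a grid of spacing 2**-level, a stand-in for solving the same
    # problem on successively finer meshes.
    h = 2.0 ** -level
    return (h * int(u / h)) ** 2

def mlmc(n_per_level, rng):
    # Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with fine and coarse evaluations coupled by a shared sample u so the
    # correction terms have small variance.
    est = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            acc += P(level, u) - (P(level - 1, u) if level > 0 else 0.0)
        est += acc / n
    return est

# Many cheap coarse samples, few expensive fine ones (counts are arbitrary).
value = mlmc([4000, 1000, 250, 60], random.Random(0))
```

    The estimate approaches E[P_3] of the toy model (about 0.27, biased below the exact value 1/3 by the level-3 grid), at a fraction of the cost of running all samples on the finest level.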

  19. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems depend on feature extraction. A feature extraction method using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then evaluated using an extreme learning machine classifier before feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, the Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.

  20. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser-film-digitized chest radiographs. One of the compression techniques used was full-frame bit allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other was vector quantization with pruned tree-structured encoding, which recent research has also found to produce a low mean-square error and a high compression ratio. The parameters used in this study were the mean-square error and the bit rate required for the compressed file. In addition to these parameters, the differences between the original and reconstructed images are presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
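    The splitting-plus-remapping idea can be sketched as: threshold the gray levels, shift the bright sub-image down so each part spans a narrower dynamic range, and keep a bit mask for exact recombination. This is an illustrative variant under our own assumptions (the mask bookkeeping in particular), not the paper's exact mapping:

```python
def split_and_remap(pixels, threshold):
    # Split a gray-level image into dim and bright sub-images around a
    # threshold, shifting the bright one down so each part spans a
    # narrower range; the mask records which pixel went where.
    # Hypothetical variant, not the paper's exact mapping.
    mask = [p >= threshold for p in pixels]
    low = [0 if m else p for p, m in zip(pixels, mask)]
    high = [p - threshold if m else 0 for p, m in zip(pixels, mask)]
    return low, high, mask

def recombine(low, high, mask, threshold):
    # Lossless inverse of split_and_remap.
    return [h + threshold if m else l for l, h, m in zip(low, high, mask)]

pixels = [0, 300, 17, 4095, 1200]   # e.g. 12-bit CT gray levels
threshold = 1024
low, high, mask = split_and_remap(pixels, threshold)
```

    Each sub-image now occupies a reduced gray-level range, which is what lets a downstream coder spend its bits more effectively on high-contrast-resolution data.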

  1. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    NASA Astrophysics Data System (ADS)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies, and lensing geometries. The method's performance is excellent, with accurate light, mass, and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.

  2. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for the simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  3. Optical image encryption scheme with multiple light paths based on compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan

    2018-02-01

    An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of the logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary, and they serve as the main keys, together with the original POM and the logistic map coefficient, in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system improved. The feasibility, security, and robustness of the method are further analysed through computer simulations.
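    The logistic-map key stream used to build the random POMs can be sketched as follows. The parameters (r = 3.99, the burn-in length, and the phase mapping exp(2πi·x)) are illustrative assumptions, not the paper's exact scheme:

```python
import cmath

def logistic_sequence(x0, r, n, burn_in=100):
    # Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k), discarding a
    # burn-in, a common practice in chaos-based encryption. r, x0 and the
    # burn-in length here are illustrative, not the record's key values.
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def phase_only_mask(n, x0, r=3.99):
    # Map each chaotic sample to a unit-modulus phase factor exp(2*pi*i*x).
    return [cmath.exp(2j * cmath.pi * x) for x in logistic_sequence(x0, r, n)]

m1 = phase_only_mask(16, 0.3567)
m2 = phase_only_mask(16, 0.3567 + 1e-10)   # a tiny change in the key
```

    The exponential sensitivity of the chaotic iteration is what gives the scheme its key sensitivity: a perturbation of 1e-10 in x0 yields a completely different mask after the burn-in.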

  4. Application of Conjugate Gradient methods to tidal simulation

    USGS Publications Warehouse

    Barragy, E.; Carey, G.F.; Walters, R.A.

    1993-01-01

    A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz-type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well-known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several incomplete LU preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry, and the Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver. © 1993.

  5. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
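    Of the five techniques, block truncation coding is simple enough to sketch in a few lines: each block is reduced to a bit-plane plus two reconstruction levels chosen to preserve the block mean and variance (the classic Delp-Mitchell form; Seasat-specific details are omitted):

```python
import math

def btc_block(block):
    # Block truncation coding: keep a bit-plane plus two levels (a, b)
    # chosen so the decoded block preserves the original mean and variance.
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    bits = [1 if p >= mean else 0 for p in block]
    q = sum(bits)
    if q in (0, n):              # flat block: one level suffices
        return bits, mean, mean
    sd = math.sqrt(var)
    a = mean - sd * math.sqrt(q / (n - q))   # level for 0-bits
    b = mean + sd * math.sqrt((n - q) / q)   # level for 1-bits
    return bits, a, b

def btc_decode(bits, a, b):
    return [b if bit else a for bit in bits]
```

    A block that genuinely contains only two gray levels is reconstructed exactly, which is why BTC preserves edges well at 1-2 bits/pixel.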

  6. Simulation studies of vestibular macular afferent-discharge patterns using a new, quasi-3-D finite volume method

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Linton, S. W.; Parnas, B. R.

    2000-01-01

    A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The afferent simulated had three calyces with processes. The number of processes and synapses, and the direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in the shortest latency, while directional activation (proximal to distal and distal to proximal) yielded the most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must the distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.
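    The finite-volume idea, updating each small volume from the fluxes exchanged with its neighbors, can be shown with a deliberately stripped-down 1D chain of passive compartments. This is our own toy sketch; the record's quasi-3-D geometry, calyces, and synaptic terms are not modeled:

```python
def fv_step(v, g, dt):
    # One explicit finite-volume update on a 1D chain of compartments:
    # each volume gains the fluxes exchanged with its neighbors, with flux
    # proportional to the voltage difference (conductance g) and no-flux
    # ends. Stable for g * dt <= 0.5.
    n = len(v)
    out = []
    for i in range(n):
        flux = 0.0
        if i > 0:
            flux += g * (v[i - 1] - v[i])
        if i < n - 1:
            flux += g * (v[i + 1] - v[i])
        out.append(v[i] + dt * flux)
    return out

# Voltage injected at one end spreads distally step by step.
v = [1.0, 0.0, 0.0, 0.0, 0.0]
after = fv_step(v, g=1.0, dt=0.2)
```

    Because every flux leaving one volume enters its neighbor, total charge is conserved exactly, which is the property that makes the finite-volume framing attractive for discretizing events in small volumes.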

  7. Coupling of in-situ X-ray Microtomography Observations with Discrete Element Simulations-Application to Powder Sintering

    NASA Astrophysics Data System (ADS)

    Olmos, L.; Bouvard, D.; Martin, C. L.; Bellet, D.; Di Michiel, M.

    2009-06-01

    The sintering of both a powder with a wide particle size distribution (0-63 μm) and a powder with artificially created pores is investigated by coupling in situ X-ray microtomography observations with discrete element simulations. The microstructure evolution of the copper particles is observed by microtomography throughout a typical sintering cycle at 1050 °C at the European Synchrotron Radiation Facility (ESRF, Grenoble, France). A quantitative analysis of the 3D images provides original data on interparticle indentation, coordination, and particle displacements throughout sintering. In parallel, the sintering of similar powder systems has been simulated with a discrete element code which incorporates appropriate sintering contact laws from the literature. The initial numerical packing is generated directly from the 3D microtomography images or, alternatively, from a random set of particles with the same size distribution. The comparison between the information drawn from the simulations and that obtained by tomography leads to the conclusion that the first method is not satisfactory, because the real particles are not perfectly spherical like the numerical ones. By contrast, the packings built with the second method show sintering behavior close to that of real materials, although particle rearrangement is underestimated by the DEM simulations.

  8. Modeling the Interaction Between Hydraulic and Natural Fractures Using Dual-Lattice Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Jing; Huang, Hai; Deo, Milind

    The interaction between hydraulic fractures (HF) and natural fractures (NF) will lead to complex fracture networks due to the branching and merging of natural and hydraulic fractures in unconventional reservoirs. In this paper, a newly developed hydraulic fracturing simulator based on the discrete element method is used to predict the generation of complex fracture networks in the presence of pre-existing natural fractures. By coupling geomechanics and reservoir flow within a dual lattice system, this simulator can effectively capture the poro-elastic effects and fluid leakoff into the formation. When HFs intercept single or multiple NFs, complex mechanisms such as direct crossing, arresting, dilating and branching can be simulated. Based on the model, the effects of injected fluid rate and viscosity, the orientation and permeability of NFs, and stress anisotropy on the HF-NF interaction process are investigated. Combined impacts from multiple parameters are also examined. The numerical results show that large values of stress anisotropy, intercepting angle, injection rate and viscosity will impede the opening of NFs.

  9. Feature extraction using extrema sampling of discrete derivatives for spike sorting in implantable upper-limb neural prostheses.

    PubMed

    Zamani, Majid; Demosthenous, Andreas

    2014-07-01

    Next generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive peaks and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Employing simulation across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.
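
    As a minimal sketch of the extrema idea (toy Gaussian waveforms, and a single first difference standing in for the paper's multi-band discrete derivatives; all names and sizes here are illustrative assumptions):

```python
import numpy as np

def extrema_features(spike):
    """Feature vector from the extrema of a spike shape and its first discrete derivative."""
    d1 = np.diff(spike)                          # first discrete derivative
    return np.array([spike.max(), spike.min(),   # positive / negative peaks of the shape
                     d1.max(), d1.min()])        # extrema of the derivative

# Two toy spike shapes with matched amplitude but different widths
t = np.linspace(-1.0, 1.0, 64)
spike_a = np.exp(-(t / 0.15) ** 2)               # narrow spike -> steep derivative
spike_b = np.exp(-(t / 0.40) ** 2)               # broad spike  -> gentle derivative
fa, fb = extrema_features(spike_a), extrema_features(spike_b)
```

    Here the amplitude features barely separate the two shapes while the derivative extrema do, which is the intuition behind sampling extrema of discrete derivatives rather than raw peaks alone.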

  10. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
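
    The encoding step can be sketched with a single-level 2-D Haar transform standing in for whichever wavelet the patent actually uses (the height field here is synthetic):

```python
import numpy as np

def haar2d(h):
    """One level of a 2-D Haar DWT: approximation (ll) plus three detail subbands."""
    # pair columns: low-pass averages and high-pass differences
    lo = (h[:, 0::2] + h[:, 1::2]) / 2.0
    hi = (h[:, 0::2] - h[:, 1::2]) / 2.0
    # pair rows of each intermediate band
    ll, hl = (lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0
    lh, hh = (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0
    return ll, hl, lh, hh

def ihaar2d(ll, hl, lh, hh):
    """Exactly invert haar2d (sum/difference of averages and differences)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + hl, ll - hl
    hi[0::2], hi[1::2] = lh + hh, lh - hh
    h = np.empty((lo.shape[0], lo.shape[1] * 2))
    h[:, 0::2], h[:, 1::2] = lo + hi, lo - hi
    return h

rng = np.random.default_rng(0)
height = rng.random((8, 8))          # toy raster height field
bands = haar2d(height)
```

    Progressive storage follows from keeping `ll` (and recursing on it) while streaming the detail bands on demand; the transform is exactly invertible, so no terrain information is lost.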

  11. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  12. Discrete Fourier Transform-Based Multivariate Image Analysis: Application to Modeling of Aromatase Inhibitory Activity.

    PubMed

    Barigye, Stephen J; Freitas, Matheus P; Ausina, Priscila; Zancan, Patricia; Sola-Penna, Mauro; Castillo-Garit, Juan A

    2018-02-12

    We recently generalized the formerly alignment-dependent multivariate image analysis applied to quantitative structure-activity relationships (MIA-QSAR) method through the application of the discrete Fourier transform (DFT), allowing for its application to noncongruent and structurally diverse chemical compound data sets. Here we report the first practical application of this method in the screening of molecular entities of therapeutic interest, with human aromatase inhibitory activity as the case study. We developed an ensemble classification model based on the two-dimensional (2D) DFT MIA-QSAR descriptors, with which we screened the NCI Diversity Set V (1593 compounds) and obtained 34 chemical compounds with possible aromatase inhibitory activity. These compounds were docked into the aromatase active site, and the 10 most promising compounds were selected for in vitro experimental validation. Of these compounds, 7419 (nonsteroidal) and 89 201 (steroidal) demonstrated satisfactory antiproliferative and aromatase inhibitory activities. The obtained results suggest that the 2D-DFT MIA-QSAR method may be useful in ligand-based virtual screening of new molecular entities of therapeutic utility.
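
    The alignment-free property can be illustrated with a minimal sketch of DFT-based image descriptors (a toy binary "molecular image" rather than a real MIA-QSAR rendering; the choice of keeping a k-by-k block of low frequencies is an assumption for illustration):

```python
import numpy as np

def dft_descriptors(img, k=4):
    """Fixed-length descriptor: low-frequency magnitudes of the 2-D DFT."""
    spec = np.abs(np.fft.fft2(img))   # magnitude spectrum is translation-invariant
    return spec[:k, :k].ravel()       # keep the k*k lowest-frequency magnitudes

mol = np.zeros((32, 32))
mol[8:14, 10:20] = 1.0                                  # toy "molecular image"
shifted = np.roll(np.roll(mol, 5, axis=0), 3, axis=1)   # same image, translated
d1, d2 = dft_descriptors(mol), dft_descriptors(shifted)
```

    Because the magnitude spectrum is unchanged under (circular) translation, the two descriptors coincide, which is what removes the alignment requirement of the original MIA-QSAR formulation.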

  13. Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph ... Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that ... over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms

  14. Exploring the effects of transducer models when training convolutional neural networks to eliminate reflection artifacts in experimental photoacoustic images

    NASA Astrophysics Data System (ADS)

    Allman, Derek; Reiter, Austin; Bell, Muyinatu

    2018-02-01

    We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.

  15. Vision based obstacle detection and grouping for helicopter guidance

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chatterji, Gano

    1993-01-01

    Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, varying from a few hundred to several thousand points, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relation provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithm in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world. Thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
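
    The horizontal-plane refinement stage can be sketched with a plain k-means loop on toy (x, y) points standing in for depth-grouped range features (the data, cluster count, and initialization are all illustrative assumptions, not the paper's flight data):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on an (n, d) point set; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # move each centroid to the mean of its members (keep it if empty)
        centroids = np.array([points[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

# Toy sparse ranges from two objects, as (x, y) points in the inertial
# horizontal plane after an initial depth-based grouping
rng = np.random.default_rng(2)
near = rng.normal([10.0, 0.0], 0.5, size=(40, 2))
far = rng.normal([50.0, 5.0], 0.5, size=(40, 2))
points = np.vstack([near, far])
labels, cents = kmeans(points, 2)
```

    Each resulting group can then be interpolated internally to build the dense range map the paper describes.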

  16. A finite elements method to solve the Bloch-Torrey equation applied to diffusion magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Dang Van; Li, Jing-Rebecca; Grebenkov, Denis; Le Bihan, Denis

    2014-04-01

    The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch-Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge-Kutta-Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
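
    The Bloch-Torrey dynamics can be illustrated with a deliberately simplified sketch: one spatial dimension, a single compartment, periodic boundaries, and operator splitting (exact phase rotation plus an explicit Euler diffusion step) in place of the paper's FEM/RKC scheme. All parameter values are made up for illustration:

```python
import numpy as np

# 1-D Bloch-Torrey: dm/dt = -i*g(t)*x*m + D * d2m/dx2 on a periodic domain
n = 128
dx = 1.0 / n
x = np.arange(n) * dx
D = 1e-3                              # diffusivity (arbitrary units)
g = 44.0                              # diffusion-encoding gradient strength
dt = 0.25 * dx * dx / D               # inside the explicit limit dt <= dx^2/(2D)

m = np.ones(n, dtype=complex)         # uniform initial transverse magnetization
for step in range(120):
    gs = g if step < 60 else -g       # bipolar gradient: dephase, then rephase
    m = np.exp(-1j * gs * x * dt) * m                                     # exact phase rotation
    m = m + dt * D * (np.roll(m, 1) - 2 * m + np.roll(m, -1)) / dx**2     # diffusion step

signal = abs(m.mean())                # diffusion-attenuated signal, below 1
```

    Because the rephasing lobe cancels the accumulated phase, the residual attenuation of `signal` is due to diffusion alone, which is the quantity diffusion MRI measures.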

  17. Efficient Relaxation of Protein-Protein Interfaces by Discrete Molecular Dynamics Simulations.

    PubMed

    Emperador, Agusti; Solernou, Albert; Sfriso, Pedro; Pons, Carles; Gelpi, Josep Lluis; Fernandez-Recio, Juan; Orozco, Modesto

    2013-02-12

    Protein-protein interactions are responsible for the transfer of information inside the cell and represent one of the most interesting research fields in structural biology. Unfortunately, after decades of intense research, experimental approaches still have difficulties in providing 3D structures for the hundreds of thousands of interactions formed between the different proteins in a living organism. The use of theoretical approaches like docking aims to complement experimental efforts to represent the structure of the protein interactome. However, we cannot ignore that current methods have limitations due to problems of sampling of the protein-protein conformational space and the lack of accuracy of available force fields. Cases that are especially difficult for prediction are those in which complex formation implies a non-negligible change in the conformation of the interacting proteins, i.e., those cases where protein flexibility plays a key role in protein-protein docking. In this work, we present a new approach to treat flexibility in docking by global structural relaxation based on ultrafast discrete molecular dynamics. On a standard benchmark of protein complexes, the method provides a general improvement over the results obtained by rigid docking. The method is especially efficient in cases with large conformational changes upon binding, in which structure relaxation with discrete molecular dynamics leads to a predictive success rate double that obtained with state-of-the-art rigid-body docking.

  18. Image encryption technique based on new two-dimensional fractional-order discrete chaotic map and Menezes–Vanstone elliptic curve cryptosystem

    NASA Astrophysics Data System (ADS)

    Liu, Zeyu; Xia, Tiecheng; Wang, Jinbo

    2018-03-01

    We propose a new fractional two-dimensional triangle function combination discrete chaotic map (2D-TFCDM) based on the discrete fractional difference. The chaotic behaviors of the proposed map are examined, and the bifurcation diagrams, the largest Lyapunov exponent plot, and the phase portraits are derived. Finally, with the secret keys generated by the Menezes–Vanstone elliptic curve cryptosystem, we apply the discrete fractional map to color image encryption. The image encryption algorithm is then analyzed in four aspects, and the results indicate that the proposed algorithm is superior to the other algorithms considered. Project supported by the National Natural Science Foundation of China (Grant Nos. 61072147 and 11271008).
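
    The scrambling stage of such schemes can be sketched with a much simpler stand-in: a logistic-map orbit (not the paper's 2D-TFCDM, and with no elliptic-curve key exchange), seeded by a hypothetical key, sorted to produce a pixel permutation. Key sensitivity comes from the chaotic divergence of nearby orbits:

```python
import numpy as np

def chaotic_permutation(n, key, r=3.99, burn=100):
    """Pixel permutation from a logistic-map orbit seeded by `key` in (0, 1)."""
    x, orbit = key, np.empty(n)
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        orbit[i] = x
    return np.argsort(orbit)              # sorting the orbit yields the permutation

def encrypt(img, key):
    perm = chaotic_permutation(img.size, key)
    return img.ravel()[perm].reshape(img.shape)

def decrypt(enc, key):
    perm = chaotic_permutation(enc.size, key)
    out = np.empty(enc.size, dtype=enc.dtype)
    out[perm] = enc.ravel()               # undo the permutation
    return out.reshape(enc.shape)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
enc = encrypt(img, key=0.3141)
```

    A pure permutation leaves the histogram unchanged, which is why practical schemes such as the paper's also diffuse pixel values rather than only scrambling positions.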

  19. An analysis of numerical convergence in discrete velocity gas dynamics for internal flows

    NASA Astrophysics Data System (ADS)

    Sekaran, Aarthi; Varghese, Philip; Goldstein, David

    2018-07-01

    The Discrete Velocity Method (DVM) for solving the Boltzmann equation has significant advantages in the modeling of non-equilibrium and near-equilibrium flows compared to other methods, in terms of reduced statistical noise, faster solutions and the ability to handle transient flows. However, the DVM's performance for rarefied flow in complex, small-scale geometries, for instance in microelectromechanical (MEMS) devices, has yet to be studied in detail. The present study focuses on the performance of the DVM for locally large Knudsen number flows of argon around sharp corners and other sources of discontinuities in the distribution function. Our analysis details the nature of the solution for some benchmark cases and introduces the concept of solution convergence for the transport terms in the discrete velocity Boltzmann equation. The limiting effects of the velocity space discretization are also investigated and the constraints on obtaining a robust, consistent solution are derived. We propose techniques to maintain solution convergence and demonstrate the implementation of a specific strategy and its effect on the fidelity of the solution for some benchmark cases.

  20. On the Importance of the Dynamics of Discretizations

    NASA Technical Reports Server (NTRS)

    Sweby, Peter K.; Yee, H. C.; Rai, ManMohan (Technical Monitor)

    1995-01-01

    It has been realized recently that the discrete maps resulting from numerical discretizations of differential equations can possess asymptotic dynamical behavior quite different from that of the original systems. This is the case not only for systems of Ordinary Differential Equations (ODEs) but in a more complicated manner for Partial Differential Equations (PDEs) used to model complex physics. The impact of the modified dynamics may be mild and even not observed for some numerical methods. For other classes of discretizations the impact may be pronounced, but not always obvious depending on the nonlinear model equations, the time steps, the grid spacings and the initial conditions. Non-convergence or convergence to periodic solutions might be easily recognizable but convergence to incorrect but plausible solutions may not be so obvious - even for discretized parameters within the linearized stability constraint. Based on our past four years of research, we will illustrate some of the pathology of the dynamics of discretizations, its possible impact and the usage of these schemes for model nonlinear ODEs, convection-diffusion equations and grid adaptations.
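
    A classic one-line example of the phenomenon: explicit Euler applied to the logistic ODE u' = u(1 - u), whose every positive solution tends to 1. Within the linearized stability limit the discrete map converges to the true equilibrium; beyond it the map (conjugate to the logistic map) settles onto a spurious periodic orbit. The step sizes below are illustrative:

```python
def euler_logistic(u0, h, steps=1000):
    """Iterate the explicit-Euler map u -> u + h*u*(1 - u) for u' = u(1 - u)."""
    u = u0
    for _ in range(steps):
        u = u + h * u * (1 - u)
    return u

small = euler_logistic(0.3, h=0.5)   # h within the stability limit: converges to 1
large = euler_logistic(0.3, h=2.5)   # beyond the limit: bounded spurious oscillation
```

    Both runs remain bounded and "plausible", which is exactly the trap the abstract warns about: the large-step orbit looks like dynamics but is an artifact of the discretization, not of the ODE.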

  1. Superfast algorithms of multidimensional discrete k-wave transforms and Volterra filtering based on superfast radon transform

    NASA Astrophysics Data System (ADS)

    Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.

    2001-12-01

    Fast algorithms for a wide class of non-separable n-dimensional (nD) discrete unitary K-transforms (DKT) are introduced. They need fewer 1D DKTs than the classical radix-2 FFT-type approach. The method utilizes a decomposition of the nD K-transform into the product of a new nD discrete Radon transform and a set of parallel, independent 1D K-transforms. If the nD K-transform has a separable kernel (e.g., the case of the discrete Fourier transform), our approach decreases the multiplicative complexity by a factor of n compared to the classical row/column separable approach. It is well known that an n-th order Volterra filter of a one-dimensional signal can be evaluated by an appropriate nD linear convolution. This work describes a new superfast algorithm for Volterra filtering. The new approach is based on the superfast discrete Radon and Nussbaumer polynomial transforms.

  2. Simulation of Hydraulic and Natural Fracture Interaction Using a Coupled DFN-DEM Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J.; Huang, H.; Deo, M.

    2016-03-01

    The presence of natural fractures will usually result in a complex fracture network due to the interactions between hydraulic and natural fractures. The reactivation of natural fractures can provide additional flow paths from the formation to the wellbore, which play a crucial role in improving hydrocarbon recovery in these ultra-low permeability reservoirs. Thus, an accurate description of the geometry of discrete fractures and bedding is highly desired for accurate flow and production predictions. Compared to conventional continuum models that represent discrete features implicitly, Discrete Fracture Network (DFN) models can realistically model the connectivity of discontinuities at both the reservoir scale and the well scale. In this work, a new hybrid numerical model that couples a Discrete Fracture Network (DFN) with the Dual-Lattice Discrete Element Method (DL-DEM) is proposed to investigate the interaction between hydraulic and natural fractures. Based on the proposed model, the effects of natural fracture orientation, density and injection properties on hydraulic-natural fracture interaction are investigated.

  3. Simulation of Hydraulic and Natural Fracture Interaction Using a Coupled DFN-DEM Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Zhou; H. Huang; M. Deo

    The presence of natural fractures will usually result in a complex fracture network due to the interactions between hydraulic and natural fractures. The reactivation of natural fractures can provide additional flow paths from the formation to the wellbore, which play a crucial role in improving hydrocarbon recovery in these ultra-low permeability reservoirs. Thus, an accurate description of the geometry of discrete fractures and bedding is highly desired for accurate flow and production predictions. Compared to conventional continuum models that represent discrete features implicitly, Discrete Fracture Network (DFN) models can realistically model the connectivity of discontinuities at both the reservoir scale and the well scale. In this work, a new hybrid numerical model that couples a Discrete Fracture Network (DFN) with the Dual-Lattice Discrete Element Method (DL-DEM) is proposed to investigate the interaction between hydraulic and natural fractures. Based on the proposed model, the effects of natural fracture orientation, density and injection properties on hydraulic-natural fracture interaction are investigated.

  4. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  5. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly used in image and video compression standards, as indicated by its adoption in JPEG2000. The lifting scheme algorithm is an alternative DWT implementation that has lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a proper choice for real-time applications such as Digital Cinema.
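
    The reversible 5/3 lifting filter bank is compact enough to sketch in full. This is a plain software model of one 1-D stage (predict step, then update step, with symmetric boundary extension), not the paper's pipelined VHDL architecture:

```python
def dwt53_forward(x):
    """One 1-D stage of the reversible 5/3 lifting DWT (even-length integer input)."""
    n = len(x) // 2
    # Predict: high-pass (detail) coefficients, symmetric extension at the right edge
    d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, 2*n - 2)]) // 2 for i in range(n)]
    # Update: low-pass (approximation) coefficients, symmetric extension at the left edge
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    return s, d

def dwt53_inverse(s, d):
    """Exactly invert dwt53_forward (integer-to-integer, lossless)."""
    n = len(s)
    x = [0] * (2 * n)
    for i in range(n):                  # undo the update step: recover even samples
        x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(n):                  # undo the predict step: recover odd samples
        x[2*i + 1] = d[i] + (x[2*i] + x[min(2*i + 2, 2*n - 2)]) // 2
    return x

samples = [3, 7, 1, 8, 2, 9, 4, 6]
lo, hi = dwt53_forward(samples)
```

    Because the inverse subtracts exactly the integer quantities the forward pass added, the round trip is bit-exact, which is what makes the 5/3 filter suitable for lossless coding.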

  6. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computational cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computational cost.

  7. Application of Laser Imaging for Bio/geophysical Studies

    NASA Technical Reports Server (NTRS)

    Hummel, J. R.; Goltz, S. M.; Depiero, N. L.; Degloria, D. P.; Pagliughi, F. M.

    1992-01-01

    SPARTA, Inc. has developed a low-cost, portable laser imager that, among other applications, can be used in bio/geophysical applications. In the application to be discussed here, the system was utilized as an imaging system for background features in a forested locale. The SPARTA mini-ladar system was used at the International Paper Northern Experimental Forest near Howland, Maine to assist in a project designed to study the thermal and radiometric phenomenology at forest edges. The imager was used to obtain data from three complex sites, a 'seed' orchard, a forest edge, and a building. The goal of the study was to demonstrate the usefulness of the laser imager as a tool to obtain geometric and internal structure data about complex 3-D objects in a natural background. The data from these images have been analyzed to obtain information about the distributions of the objects in a scene. A range detection algorithm has been used to identify individual objects in a laser image and an edge detection algorithm then applied to highlight the outlines of discrete objects. An example of an image processed in such a manner is shown. Described here are the results from the study. In addition, results are presented outlining how the laser imaging system could be used to obtain other important information about bio/geophysical systems, such as the distribution of woody material in forests.

  8. Parallel detecting super-resolution microscopy using correlation based image restoration

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Zhu, Dazhao; Kuang, Cuifang; Liu, Xu

    2017-12-01

    A novel approach to achieve the image restoration is proposed in which each detector's relative position in the detector array is no longer a necessity. We can identify each detector's relative location by extracting a certain area from one of the detector's image and scanning it on other detectors' images. According to this location, we can generate the point spread functions (PSF) for each detector and perform deconvolution for image restoration. Equipped with this method, the microscope with discretionally designed detector array can be easily constructed without the concern of exact relative locations of detectors. The simulated results and experimental results show the total improvement in resolution with a factor of 1.7 compared to conventional confocal fluorescence microscopy. With the significant enhancement in resolution and easiness for application of this method, this novel method should have potential for a wide range of application in fluorescence microscopy based on parallel detecting.
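
    The locate-by-scanning step can be sketched as a cross-correlation shift search on synthetic images (real detector data would be noisier, with sub-pixel shifts; the FFT-based search over whole-pixel offsets below is an illustrative simplification):

```python
import numpy as np

def find_shift(ref, img):
    """Integer (dy, dx) that best aligns img back to ref, via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the circular peak position to signed shifts
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                           # one detector's image
img = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)   # another detector: shifted view
shift = find_shift(ref, img)
```

    Applying the recovered shift to `img` reproduces `ref` exactly here; in the paper's setting the per-detector offsets found this way parameterize the per-detector PSFs used for deconvolution.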

  9. Characterization of the Distance Relationship Between Localized Serotonin Receptors and Glia Cells on Fluorescence Microscopy Images of Brain Tissue.

    PubMed

    Jacak, Jaroslaw; Schaller, Susanne; Borgmann, Daniela; Winkler, Stephan M

    2015-08-01

    We here present two new methods for the characterization of fluorescent localization microscopy images obtained from immunostained brain tissue sections. Direct stochastic optical reconstruction microscopy images of 5-HT1A serotonin receptors and glial fibrillary acidic proteins in healthy cryopreserved brain tissues are analyzed. In detail, we here present two image processing methods for characterizing differences in receptor distribution on glial cells and their distribution on neural cells: One variant relies on skeleton extraction and adaptive thresholding, the other on k-means based discrete layer segmentation. Experimental results show that both methods can be applied for distinguishing classes of images with respect to serotonin receptor distribution. Quantification of nanoscopic changes in relative protein expression on particular cell types can be used to analyze degeneration in tissues caused by diseases or medical treatment.
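
    The adaptive-thresholding ingredient can be sketched as block-local mean thresholding on a toy image (real inputs would be dSTORM localization maps; the block size and offset are hypothetical):

```python
import numpy as np

def adaptive_threshold(img, block=8, offset=0.0):
    """Binarize by comparing each pixel with the mean of its local block."""
    h, w = img.shape
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = patch > patch.mean() + offset
    return out

# Toy image: smooth background gradient with two bright "receptor" spots
img = np.linspace(0.0, 1.0, 32)[None, :] * np.ones((32, 32))
img[5, 5] = img[20, 28] = 5.0
mask = adaptive_threshold(img)
```

    A single global threshold would either clip the bright half of the gradient or miss spots in the dim half; thresholding against the local mean keeps both spots while adapting to the background.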

  10. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data.

    PubMed

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; van de Kamp, Thomas; dos Santos Rolo, Tomy; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer

    2015-03-09

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
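
    The role of the TV term can be illustrated in one dimension with plain gradient descent on a smoothed TV functional (the paper solves the full tomographic problem with a conjugate gradient method and picks the regularization weight from a discrete L-curve; here the weight, smoothing, and step size are fixed, made-up values):

```python
import numpy as np

def tv_denoise(b, lam=0.5, eps=1e-2, step=0.02, iters=3000):
    """Gradient descent on 0.5*||x - b||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = b.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)             # derivative of the smoothed |d|
        # divergence-like assembly of the TV gradient at each sample
        tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x = x - step * ((x - b) + lam * tv_grad)
    return x

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant phantom
noisy = clean + 0.3 * rng.standard_normal(100)
denoised = tv_denoise(noisy)
```

    TV regularization suppresses oscillatory noise while largely preserving the jump, which is why it suits few-view reconstruction of piecewise-smooth tissue images.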

  11. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data

    DOE PAGES

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; ...

    2015-01-01

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.

  12. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  13. Two-Layer Fragile Watermarking Method Secured with Chaotic Map for Authentication of Digital Holy Quran

    PubMed Central

    Khalil, Mohammed S.; Khan, Muhammad Khurram; Alginahi, Yasser M.

    2014-01-01

    This paper presents a novel watermarking method to facilitate the authentication and detection of image forgery in Quran images. A two-layer embedding scheme operating in the wavelet and spatial domains is introduced to enhance the sensitivity of the fragile watermark and to withstand attacks. A discrete wavelet transform is applied to decompose the host image into wavelet subbands prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, and the least significant bits are then utilized to hide another watermark. A chaotic map is utilized to scramble the watermark and secure it against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed method is fragile and achieves superior tampering detection even when the tampered area is very small. PMID:25028681

  14. Two-layer fragile watermarking method secured with chaotic map for authentication of digital Holy Quran.

    PubMed

    Khalil, Mohammed S; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M

    2014-01-01

    This paper presents a novel watermarking method to facilitate the authentication and detection of image forgery in Quran images. A two-layer embedding scheme operating in the wavelet and spatial domains is introduced to enhance the sensitivity of the fragile watermark and to withstand attacks. A discrete wavelet transform is applied to decompose the host image into wavelet subbands prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, and the least significant bits are then utilized to hide another watermark. A chaotic map is utilized to scramble the watermark and secure it against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed method is fragile and achieves superior tampering detection even when the tampered area is very small.
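    The spatial-domain layer of such a scheme (LSB embedding of a chaotically scrambled watermark) can be illustrated in a few lines. This is a minimal sketch of that single layer only, with an assumed logistic-map keystream; the record's full method also embeds a first watermark in the wavelet domain, which is omitted here:

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    # chaotic logistic map -> pseudo-random bit stream used for scrambling
    bits = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def embed_lsb(img, wm_bits, key_bits):
    scrambled = wm_bits ^ key_bits           # XOR scrambling with the chaotic key
    out = img.copy()
    flat = out.ravel()
    flat[:len(scrambled)] = (flat[:len(scrambled)] & 0xFE) | scrambled
    return out

def extract_lsb(img, n, key_bits):
    # read back the LSBs and undo the scrambling
    return (img.ravel()[:n] & 1) ^ key_bits

rng = np.random.default_rng(1)
host = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
wm = rng.integers(0, 2, size=64, dtype=np.uint8)
key = logistic_keystream(64)
marked = embed_lsb(host, wm, key)
recovered = extract_lsb(marked, 64, key)
print(np.array_equal(recovered, wm))  # True
```

    Since only the least significant bit of each pixel changes, the marked image differs from the host by at most one intensity level per pixel, which is why such schemes preserve image quality while remaining fragile to tampering.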

  15. Compressive sensing sectional imaging for single-shot in-line self-interference incoherent holography

    NASA Astrophysics Data System (ADS)

    Weng, Jiawen; Clark, David C.; Kim, Myung K.

    2016-05-01

    A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point-sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to the ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for 3D analysis of dynamic systems.

  16. A new splitting scheme to the discrete Boltzmann equation for non-ideal gases on non-uniform meshes

    NASA Astrophysics Data System (ADS)

    Patel, Saumil; Lee, Taehun

    2016-12-01

    We present a novel numerical procedure for solving the discrete Boltzmann equations (DBE) on non-uniform meshes. Our scheme is based on the Strang splitting method where we seek to investigate two-phase flow applications. In this note, we investigate the onset of parasitic currents which arise in many computational two-phase algorithms. To the best of our knowledge, the results presented in this work show, for the first time, a spectral element discontinuous Galerkin (SEDG) discretization of a discrete Boltzmann equation which successfully eliminates parasitic currents on non-uniform meshes. With the hope that this technique can be used for applications in complex geometries, calculations are performed on non-uniform mesh distributions by using high-order (spectral), body-fitting quadrilateral elements. Validation and verification of our work is carried out by comparing results against the classical 2D Young-Laplace law problem for a static drop.

  17. Bound states and interactions of vortex solitons in the discrete Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Mejía-Cortés, C.; Soto-Crespo, J. M.; Vicencio, Rodrigo A.; Molina, Mario I.

    2012-08-01

    By using different continuation methods, we unveil a wide region in the parameter space of the discrete cubic-quintic complex Ginzburg-Landau equation, where several families of stable vortex solitons coexist. All these stationary solutions have a symmetric amplitude profile and two different topological charges. We also observe the dynamical formation of a variety of “bound-state” solutions composed of two or more of these vortex solitons. All of these stable composite structures persist in the conservative cubic limit for high values of their power content.

  18. On the existence of mosaic-skeleton approximations for discrete analogues of integral operators

    NASA Astrophysics Data System (ADS)

    Kashirin, A. A.; Taltykina, M. Yu.

    2017-09-01

    Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs) are constructed. The existence of mosaic-skeleton approximations for the matrices of the indicated systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.
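    The existence result rests on the observation that matrix blocks coupling well-separated parts of the boundary are numerically low-rank. A small numpy illustration of this fact (the cluster positions, sizes and the 1/r kernel are invented for the sketch; the paper's SLAEs come from actual boundary discretizations):

```python
import numpy as np

# Two well-separated point clusters and their 1/r interaction block, as arises
# when a boundary integral equation of potential theory is discretized.
rng = np.random.default_rng(2)
src = rng.uniform(0.0, 1.0, size=(60, 3))
tgt = rng.uniform(0.0, 1.0, size=(60, 3)) + np.array([5.0, 0.0, 0.0])
r = np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=2)
A = 1.0 / r

# Rank-15 truncated SVD: storing the factors needs 15*(60+60) numbers instead
# of 60*60, and the cost of a matrix-vector product drops accordingly.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 15
Ak = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(A - Ak) / np.linalg.norm(A)
print(rel_err)   # small: the far-field block is numerically low-rank
```

    A mosaic-skeleton approximation tiles the full matrix with such low-rank blocks (keeping near-field blocks dense), which is what reduces the complexity of each iteration when solving the SLAEs.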

  19. xTract: software for characterizing conformational changes of protein complexes by quantitative cross-linking mass spectrometry.

    PubMed

    Walzthoeni, Thomas; Joachimiak, Lukasz A; Rosenberger, George; Röst, Hannes L; Malmström, Lars; Leitner, Alexander; Frydman, Judith; Aebersold, Ruedi

    2015-12-01

    Chemical cross-linking in combination with mass spectrometry generates distance restraints of amino acid pairs in close proximity on the surface of native proteins and protein complexes. In this study we used quantitative mass spectrometry and chemical cross-linking to quantify differences in cross-linked peptides obtained from complexes in spatially discrete states. We describe a generic computational pipeline for quantitative cross-linking mass spectrometry consisting of modules for quantitative data extraction and statistical assessment of the obtained results. We used the method to detect conformational changes in two model systems: firefly luciferase and the bovine TRiC complex. Our method discovers and explains the structural heterogeneity of protein complexes using only sparse structural information.

  20. Phase synchronization based on a Dual-Tree Complex Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Ferreira, Maria Teodora; Domingues, Margarete Oliveira; Macau, Elbert E. N.

    2016-11-01

    In this work, we show the applicability of our Discrete Complex Wavelet Approach (DCWA) to verify the phenomenon of phase synchronization transition in two coupled chaotic Lorenz systems. DCWA is based on the phase assignment from complex wavelet coefficients obtained by using a Dual-Tree Complex Wavelet Transform (DT-CWT). We analyzed two coupled chaotic Lorenz systems, aiming to detect the transition from non-phase synchronization to phase synchronization. In addition, we check how well the method detects periods of 2π phase-slips. In all experiments, DCWA is compared with classical phase detection methods such as those based on the arctangent and the Hilbert transform, showing much better performance.
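    One of the classical baselines mentioned, phase extraction via the Hilbert transform, can be sketched as follows. This is an FFT-based analytic signal applied to two synthetic phase-shifted sinusoids (invented test signals); it illustrates the baseline method, not the DCWA itself:

```python
import numpy as np

def analytic_signal(x):
    # Hilbert-transform-based analytic signal via the FFT: zero out negative
    # frequencies and double the positive ones.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 1000, endpoint=False)
x1 = np.sin(2 * np.pi * 10 * t)
x2 = np.sin(2 * np.pi * 10 * t + 0.5)       # same frequency, shifted phase
phi1 = np.angle(analytic_signal(x1))
phi2 = np.angle(analytic_signal(x2))
dphi = np.unwrap(phi2 - phi1)
print(np.mean(dphi[100:-100]))              # ≈ 0.5: constant difference, phase-locked
```

    Phase synchronization detection then amounts to checking whether the instantaneous phase difference stays bounded, with 2π jumps signalling phase-slips.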

  1. Orientation diffusions.

    PubMed

    Perona, P

    1998-01-01

    Diffusions are useful for image processing and computer vision because they provide a convenient way of smoothing noisy data, analyzing images at multiple scales, and enhancing discontinuities. A number of diffusions of image brightness have been defined and studied so far; they may be applied to scalar and vector-valued quantities that are naturally associated with intervals of either the real line, or other flat manifolds. Some quantities of interest in computer vision, and other areas of engineering that deal with images, are defined on curved manifolds; typical examples are orientation and hue that are defined on the circle. Generalizing brightness diffusions to orientation is not straightforward, especially in the case where a discrete implementation is sought. An example of what may go wrong is presented. A method is proposed to define diffusions of orientation-like quantities. First a definition in the continuum is discussed, then a discrete orientation diffusion is proposed. The behavior of such diffusions is explored both analytically and experimentally. It is shown how such orientation diffusions contain a nonlinearity that is reminiscent of edge-process and anisotropic diffusion. A number of open questions are proposed at the end.
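    The wraparound difficulty can be made concrete: naively averaging raw angles near ±π would average +π and -π to 0, whereas diffusing the unit-vector embedding (cos θ, sin θ) respects the circular topology. A minimal 1D sketch, with an invented explicit scheme and parameters rather than the paper's formulation:

```python
import numpy as np

def orientation_diffuse(theta, iters=200, dt=0.2):
    # Diffuse angles by smoothing their unit-vector embedding (cos, sin) with
    # an explicit periodic heat-equation step, then read the angle back.
    c, s = np.cos(theta), np.sin(theta)
    for _ in range(iters):
        c = c + dt * (np.roll(c, 1) - 2 * c + np.roll(c, -1))
        s = s + dt * (np.roll(s, 1) - 2 * s + np.roll(s, -1))
    return np.arctan2(s, c)

# Noisy orientations clustered at the wraparound point ±pi.
rng = np.random.default_rng(3)
theta = np.pi + 0.1 * rng.standard_normal(64)
theta = np.angle(np.exp(1j * theta))        # wrap to (-pi, pi]
smooth = orientation_diffuse(theta)
err = np.abs(np.angle(np.exp(1j * (smooth - np.pi))))
print(err.max())   # small: the smoothed estimates stay near ±pi
```

    Note that the embedded diffusion does not keep (c, s) exactly on the unit circle; the paper's construction addresses this, whereas the sketch simply reads back the angle with arctan2.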

  2. An Equivalent Fracture Modeling Method

    NASA Astrophysics Data System (ADS)

    Li, Shaohua; Zhang, Shujuan; Yu, Gaoming; Xu, Aiyun

    2017-12-01

    A 3D fracture network model is built based on discrete fracture surfaces, which are simulated based on fracture length, dip, aperture, height and so on. The area of interest of the Wumishan Formation of the Renqiu buried hill reservoir is about 57 square kilometers and the thickness of the target strata is more than 2000 meters. Combined with the great fracture density, the fracture simulation and upscaling of the discrete fracture network model of the Wumishan Formation are computationally intensive. In order to solve this problem, a method of equivalent fracture modeling is proposed. First of all, taking the fracture interpretation data obtained from imaging logging and conventional logging as the basic data, establish the reservoir level model; then, under the constraint of the reservoir level model, take the fault distance analysis model as the second variable and establish a fracture density model by the Sequential Gaussian Simulation method. The width, height and length of the fractures are then increased while their density is decreased, in order to keep similar porosity and permeability after upscaling the discrete fracture network model. In this way, the fracture model of the whole area of interest can be built within an acceptable time.

  3. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
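    The 8×8 block DCT whose boundaries cause the blocking distortion can be written out directly. A minimal orthonormal DCT-II sketch (this shows the transform JPEG uses, not the ESAP/IPF filters themselves; the test block is invented):

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix, the transform underlying JPEG's 8x8 blocks.
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix(8)
rng = np.random.default_rng(4)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
coeffs = C @ block @ C.T          # forward 2D DCT of one image block
recon = C.T @ coeffs @ C          # inverse (C is orthogonal)
print(np.max(np.abs(recon - block)))   # numerically zero
```

    Coarse quantization of `coeffs`, applied independently per block, is what produces the visible discontinuities at block boundaries that the ESAP/IPF post-filters aim to remove.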

  4. Radiative Transfer Modeling of a Large Pool Fire by Discrete Ordinates, Discrete Transfer, Ray Tracing, Monte Carlo and Moment Methods

    NASA Technical Reports Server (NTRS)

    Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.

    2004-01-01

    Five computational methods for solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S(sub 4) and LC(sub 11) quadratures, and moment model using the M(sub 1) closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC(sub 11) is shown to be the more accurate than the commonly used S(sub 4) quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study where the M(sub 1) method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M(sub 1) results agree well with other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, M(sub 1) results are comparable to DOM S(sub 4).

  5. Robust image watermarking using DWT and SVD for copyright protection

    NASA Astrophysics Data System (ADS)

    Harjito, Bambang; Suryani, Esti

    2017-02-01

    The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image is called the cover medium, and the watermark image is converted into gray scale. Then, they are transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2 and HL2. The watermark image is embedded into the cover medium in sub-band LL2. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experiment results show that the proposed method is robust against several image processing attacks such as Gaussian, Poisson and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784 and 0.889782, respectively. The watermark image can be detected and extracted.

  6. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    PubMed

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work involves retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.

  7. Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.

    PubMed

    Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas

    2017-10-01

    We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of the graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.

  8. Automated discrete electron tomography - Towards routine high-fidelity reconstruction of nanomaterials.

    PubMed

    Zhuge, Xiaodong; Jinnai, Hiroshi; Dunin-Borkowski, Rafal E; Migunov, Vadim; Bals, Sara; Cool, Pegie; Bons, Anton-Jan; Batenburg, Kees Joost

    2017-04-01

    Electron tomography is an essential imaging technique for the investigation of morphology and 3D structure of nanomaterials. This method, however, suffers from well-known missing wedge artifacts due to a restricted tilt range, which limits the objectiveness, repeatability and efficiency of quantitative structural analysis. Discrete tomography represents one of the promising reconstruction techniques for materials science, potentially capable of delivering higher fidelity reconstructions by exploiting the prior knowledge of the limited number of material compositions in a specimen. However, the application of discrete tomography to practical datasets remains a difficult task due to the underlying challenging mathematical problem. In practice, it is often hard to obtain consistent reconstructions from experimental datasets. In addition, numerous parameters need to be tuned manually, which can lead to bias and non-repeatability. In this paper, we present the application of a new iterative reconstruction technique, named TVR-DART, for discrete electron tomography. The technique is capable of consistently delivering reconstructions with significantly reduced missing wedge artifacts for a variety of challenging data and imaging conditions, and can automatically estimate its key parameters. We describe the principles of the technique and apply it to datasets from three different types of samples acquired under diverse imaging modes. By further reducing the available tilt range and number of projections, we show that the proposed technique can still produce consistent reconstructions with minimized missing wedge artifacts. This new development promises to provide the electron microscopy community with an easy-to-use and robust tool for high-fidelity 3D characterization of nanomaterials.

  9. A partial differential equation-based general framework adapted to Rayleigh's, Rician's and Gaussian's distributed noise for restoration and enhancement of magnetic resonance image.

    PubMed

    Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev

    2016-01-01

    The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density functions: Gaussian, Rayleigh or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation-based priors: a total variation-based prior, an anisotropic diffusion-based prior, and a complex diffusion (CD)-based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. The finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods is presented for the brain web dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD-based prior performs better than the other priors in consideration.
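    The evaluation metrics quoted here (mean square error and peak signal-to-noise ratio) are standard; a minimal sketch, with invented test images:

```python
import numpy as np

def mse(a, b):
    # mean square error between two images
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    # peak signal-to-noise ratio in dB for images with the given peak value
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(5)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = clean + rng.normal(0, 10, size=clean.shape)   # additive noise, sigma = 10
print(round(psnr(clean, noisy), 1))   # ≈ 28 dB, since MSE ≈ sigma^2 = 100
```

    A denoising method is judged by how much it raises the PSNR (equivalently, lowers the MSE) of its output relative to the noisy input, alongside structural measures such as SSIM.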

  10. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets.

    PubMed

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-04-01

    Liver segmentation continues to remain a major challenge, largely due to the liver's complex interface with surrounding anatomical structures (stomach, kidney and heart), high noise levels and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low-contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into the surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], play important roles; thus, their values are carefully selected. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method.

  11. Numerical simulation of deformation and failure processes of a complex technical object under impact loading

    NASA Astrophysics Data System (ADS)

    Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.

    2018-04-01

    The main points in the development of numerical tools for the simulation of deformation and failure of complex technical objects under nonstationary conditions of extreme loading are presented. The possibility of extending the dynamic method for the construction of difference grids to the 3D case is shown. A 3D realization of the discrete-continuum approach to the deformation and failure of complex technical objects is carried out. The efficiency of the existing software package for 3D modelling is shown.

  12. Stokes phenomena in discrete Painlevé II.

    PubMed

    Joshi, N; Lustri, C J; Luu, S

    2017-02-01

    We consider the asymptotic behaviour of the second discrete Painlevé equation in the limit as the independent variable becomes large. Using asymptotic power series, we find solutions that are asymptotically pole-free within some region of the complex plane. These asymptotic solutions exhibit the Stokes phenomenon, which is typically invisible to classical power series methods. We subsequently apply exponential asymptotic techniques to investigate such phenomena, and obtain mathematical descriptions of the rapid switching behaviour associated with Stokes curves. Through this analysis, we determine the regions of the complex plane in which the asymptotic behaviour is described by a power series expression, and find that the behaviour of these asymptotic solutions shares a number of features with the tronquée and tri-tronquée solutions of the second continuous Painlevé equation.

  13. Stokes phenomena in discrete Painlevé II

    PubMed Central

    Joshi, N.

    2017-01-01

    We consider the asymptotic behaviour of the second discrete Painlevé equation in the limit as the independent variable becomes large. Using asymptotic power series, we find solutions that are asymptotically pole-free within some region of the complex plane. These asymptotic solutions exhibit the Stokes phenomenon, which is typically invisible to classical power series methods. We subsequently apply exponential asymptotic techniques to investigate such phenomena, and obtain mathematical descriptions of the rapid switching behaviour associated with Stokes curves. Through this analysis, we determine the regions of the complex plane in which the asymptotic behaviour is described by a power series expression, and find that the behaviour of these asymptotic solutions shares a number of features with the tronquée and tri-tronquée solutions of the second continuous Painlevé equation. PMID:28293132

  14. Discrete Fourier Transform Analysis in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2009-01-01

    Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus Nlog(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.
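    The starting point, the DFT viewed as a linear transformation on a complex vector space, can be written out explicitly and checked against the FFT. This plain O(N²) matrix form illustrates the baseline only; the record's optimized geometric strategy is not reproduced here:

```python
import numpy as np

def dft_matrix(n):
    # The DFT as a linear map on C^n: F[j, k] = exp(-2*pi*i*j*k/n).
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * j * k / n)

n = 64
rng = np.random.default_rng(6)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
F = dft_matrix(n)
print(np.allclose(F @ x, np.fft.fft(x)))   # True: same transform, O(n^2) vs O(n log n)
```

    Iterative phase-retrieval loops call this transform many times, which is why reducing the per-call operation count, as the record proposes, pays off even against the FFT's N log N.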

  15. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  16. Nonlinear research of an image motion stabilization system embedded in a space land-survey telescope

    NASA Astrophysics Data System (ADS)

    Somov, Yevgeny; Butyrin, Sergey; Siguerdidjane, Houria

    2017-01-01

    We consider an image motion stabilization system embedded into a space telescope for scanning optoelectronic observation of terrestrial targets. A model of this system is presented, taking into account the physical hysteresis of the piezo-ceramic driver and a time delay in forming the digital control. We present the elaborated algorithms for discrete filtering and digital control, results on the analysis of the image motion velocity oscillations in the telescope focal plane, and methods for terrestrial and in-flight verification of the system.

  17. An efficient approach for the assembly of mass and stiffness matrices of structures with modifications

    NASA Astrophysics Data System (ADS)

    Wagner, Andreas; Spelsberg-Korspeter, Gottfried

    2013-09-01

    The finite element method is one of the most common tools for the comprehensive analysis of structures with applications reaching from static, often nonlinear stress-strain, to transient dynamic analyses. For single calculations the expense to generate an appropriate mesh is often insignificant compared to the analysis time even for complex geometries and therefore negligible. However, this is not the case for certain other applications, most notably structural optimization procedures, where the (re-)meshing effort is very important with respect to the total runtime of the procedure. Thus it is desirable to find methods to efficiently generate mass and stiffness matrices allowing to reduce this effort, especially for structures with modifications of minor complexity, e.g. panels with cutouts. Therefore, a modeling approach referred to as Energy Modification Method is proposed in this paper. The underlying idea is to model and discretize the basis structure, e.g. a plate, and the modifications, e.g. holes, separately. The discretized energy expressions of the modifications are then subtracted from (or added to) the energy expressions of the basis structure and the coordinates are related to each other by kinematical constraints leading to the mass and stiffness matrices of the complete structure. This approach will be demonstrated by two simple examples, a rod with varying material properties and a rectangular plate with a rectangular or circular hole, using a finite element discretization as basis. Convergence studies of the method based on the latter example follow demonstrating the rapid convergence and efficiency of the method. Finally, the Energy Modification Method is successfully used in the structural optimization of a circular plate with holes, with the objective to split all its double eigenfrequencies.
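    Because discretized energy expressions are additive, subtracting the modified elements' contributions from a pre-assembled basis matrix reproduces a direct re-assembly of the modified structure. A minimal 1D rod sketch of this idea (the element count, stiffness values and softened segment are invented; the paper treats plates with cutouts and couples separately discretized parts by kinematical constraints):

```python
import numpy as np

def assemble_rod(n_el, EA):
    # Standard 1D rod assembly: element stiffness (EA/h) * [[1, -1], [-1, 1]].
    n = n_el + 1
    h = 1.0 / n_el
    K = np.zeros((n, n))
    for e in range(n_el):
        ke = (EA[e] / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += ke
    return K

n_el = 10
EA_base = np.full(n_el, 2.0)
EA_mod = EA_base.copy()
EA_mod[4:7] = 0.5                  # "modification": softer middle segment

# Energy-modification idea: assemble the unmodified basis structure once, then
# subtract the old and add the new energy contributions of the modified
# elements only, instead of re-assembling the whole structure.
K = assemble_rod(n_el, EA_base)
K_delta = assemble_rod(n_el, EA_mod - EA_base)   # nonzero only on elements 4..6
K_mod = K + K_delta
print(np.allclose(K_mod, assemble_rod(n_el, EA_mod)))   # True
```

    In an optimization loop over many candidate modifications, only the small `K_delta` changes per candidate, which is the source of the method's efficiency.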

  18. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of discrete networks of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flow numerically in a model, and on a deterministic optimization algorithm to invert a set of piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces governed by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are treated as unknowns. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the observed data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. The method has been successfully tested on three theoretical, simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in network geometry and matrix heterogeneity.

  19. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and image interpolation techniques are therefore needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.

  20. Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging

    DTIC Science & Technology

    2015-03-26

    Fourier Analysis and Applications, vol. 14, pp. 838–858, 2008. 11. D. J. Cooke, "A discrete X-ray transform for chromotomographic hyperspectral imaging ... medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue ... i.e., at most K entries of x are nonzero. In many settings, this is a valid signal model; for example, JPEG2000 exploits the fact that natural images

  1. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that, as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems, those for which parallel processing is ideally suited, there is often enough parallel workload that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  2. Targeting Photoinduced DNA Destruction by Ru(II) Tetraazaphenanthrene in Live Cells by Signal Peptide.

    PubMed

    Burke, Christopher S; Byrne, Aisling; Keyes, Tia E

    2018-06-06

    Exploiting NF-κB transcription factor peptide conjugation, a Ru(II)-bis-tap complex (tap = 1,4,5,8-tetraazaphenanthrene) was targeted specifically to the nuclei of live HeLa and CHO cells for the first time. DNA binding of the complex within the nucleus of live cells was evident from the gradual extinction of the metal complex luminescence after it had crossed the nuclear envelope, attributed to guanine quenching of the ruthenium emission via photoinduced electron transfer. Resonance Raman imaging confirmed that the complex remained in the nucleus after the emission was extinguished. In the dark and under imaging conditions the cells remained viable, but efficient cellular destruction was induced with precise spatiotemporal control by applying higher irradiation intensities to selected cells. Solution studies indicate that the peptide-conjugated complex associates strongly with calf thymus DNA ex cellulo, and gel electrophoresis confirmed that the peptide conjugate is capable of singlet-oxygen-independent photodamage to plasmid DNA. This indicates that the observed efficient cellular destruction likely operates via direct DNA oxidation by photoinduced electron transfer between guanine and the precision-targeted Ru(II)-tap probe. The discrete targeting of polyazaaromatic complexes to the cell nucleus and confirmation that they are photocytotoxic after nuclear delivery is an important step toward their application in cellular phototherapy.

  3. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP-hard problem at the same time. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the objective function and building on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can find the optimal thresholds efficiently and precisely, and that they are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
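
As a rough illustration of the objective being optimized, the sketch below computes Kapur's entropy for a candidate threshold set and finds the best single threshold by the exhaustive search that MDGWO is compared against; the toy histogram and function names are illustrative, not taken from the paper.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    # sum of Shannon entropies of the classes induced by the thresholds
    p = hist / hist.sum()
    edges = [0] + sorted(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            q = p[lo:hi] / w
            q = q[q > 0]
            total -= np.sum(q * np.log(q))
    return total

# toy bimodal histogram; exhaustive search over one threshold gives the
# reference result a metaheuristic such as MDGWO should approach
rng = np.random.default_rng(0)
pixels = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                                 rng.normal(180, 12, 5000)]), 0, 255)
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
best_t = max(range(1, 256), key=lambda t: kapur_entropy(hist, [t]))
```

With multiple thresholds the exhaustive search cost explodes combinatorially, which is exactly what motivates the metaheuristic.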

  4. On the number of eigenvalues of the discrete one-dimensional Dirac operator with a complex potential

    NASA Astrophysics Data System (ADS)

    Hulko, Artem

    2018-03-01

    In this paper we define a one-dimensional discrete Dirac operator on Z. We study the eigenvalues of the Dirac operator with a complex potential. We obtain bounds on the total number of eigenvalues in the case where V decays exponentially at infinity. We also estimate the number of eigenvalues for the discrete Schrödinger operator with a complex potential on Z. That is, we extend the result obtained by Hulko (Bull Math Sci, to appear) to the whole of Z.

  5. An Analysis Method for Superconducting Resonator Parameter Extraction with Complex Baseline Removal

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe

    2014-01-01

    A new semi-empirical model is proposed for extracting the quality (Q) factors of arrays of superconducting microwave kinetic inductance detectors (MKIDs). The determination of the total internal and coupling Q factors enables the computation of the loss in the superconducting transmission lines. The method used allows the simultaneous analysis of multiple interacting discrete resonators with the presence of a complex spectral baseline arising from reflections in the system. The baseline removal allows an unbiased estimate of the device response as measured in a cryogenic instrumentation setting.

  6. On an image reconstruction method for ECT

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. The method is based on the assumption that a very simple relationship holds between the measured data and the source, namely that the data are described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data-processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired with differential coil-type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method, which restored a sharp image of discrete multiple holes from data in which the hole signals interfered. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been demonstrated by many results in which a much finer image than the original was reconstructed.
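
The convolution assumption makes the inverse step a standard deconvolution. A minimal 1D sketch, assuming a Gaussian LSF (the paper estimates the PSF/LSF from calibration flaws instead), restores a notch profile by Wiener deconvolution:

```python
import numpy as np

# measured ECT signal modeled as the convolution of a line spread
# function (LSF) with the flaw profile; restore by Wiener deconvolution
n = 128
x = np.arange(n)
flaw = ((x > 50) & (x < 58)).astype(float)        # narrow notch profile
lsf = np.exp(-0.5 * ((x - n // 2) / 6.0) ** 2)    # assumed Gaussian LSF
lsf /= lsf.sum()
H = np.fft.fft(np.fft.ifftshift(lsf))             # zero-phase kernel
measured = np.real(np.fft.ifft(np.fft.fft(flaw) * H))

reg = 1e-3                                        # Wiener regularization
restored = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(H)
                               / (np.abs(H) ** 2 + reg)))
```

The restored profile is markedly narrower than the blurred measurement, which is the effect the abstract reports for the line flaw width.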

  7. A mesoscopic bridging scale method for fluids and coupling dissipative particle dynamics with continuum finite element method

    PubMed Central

    Kojic, Milos; Filipovic, Nenad; Tsuda, Akira

    2012-01-01

    A multiscale procedure to couple a mesoscale discrete particle model and a macroscale continuum model of incompressible fluid flow is proposed in this study. We call this procedure the mesoscopic bridging scale (MBS) method, since it is developed on the basis of the bridging scale method for coupling molecular dynamics and finite element models [G.J. Wagner, W.K. Liu, Coupling of atomistic and continuum simulations using a bridging scale decomposition, J. Comput. Phys. 190 (2003) 249–274]. We derive the governing equations of the MBS method and show that the differential equations of motion of the mesoscale discrete particle model and the finite element (FE) model are coupled only through the force terms. Based on this coupling, we express the finite element equations, which rely on the Navier–Stokes and continuity equations, in such a way that the internal nodal FE forces are evaluated using viscous stresses from the mesoscale model. The dissipative particle dynamics (DPD) method is employed for the discrete particle mesoscale model. The entire fluid domain is divided into a local domain and a global domain. Fluid flow in the local domain is modeled with both the DPD and FE methods, while fluid flow in the global domain is modeled by the FE method only. The MBS method is suitable for modeling complex (colloidal) fluid flows, where continuum methods are sufficiently accurate only in the large fluid domain, while small, local regions of particular interest require detailed modeling by mesoscopic discrete particles. Solved examples, simple Poiseuille and driven cavity flows, illustrate the applicability of the proposed MBS method. PMID:23814322

  8. The Propagation of Movement Variability in Time: A Methodological Approach for Discrete Movements with Multiple Degrees of Freedom.

    PubMed

    Krüger, Melanie; Straube, Andreas; Eggert, Thomas

    2017-01-01

    In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method make it possible to detect and interpret changes in movement coordination due to, e.g., learning, external task constraints, or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas mathematical approaches exist to investigate the propagation of movement variability in time for cyclical movements such as locomotion (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied to discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis makes it possible to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple-DoF effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, and altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is shown that, by using canonical correlation analysis, alterations in movement coordination that indicate changes in the control strategy concerning the use of motor redundancy can be detected, which represents an important methodological advance in the context of neuromechanics.
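
A compact way to compute the canonical correlations the authors propose is QR factorization followed by an SVD. The sketch below is illustrative, not the study's code: it builds synthetic inter-trial variability for a 3-DoF effector at two time points with one shared variability mode, which should show up as a single strong canonical correlation.

```python
import numpy as np

def canonical_correlations(X, Y):
    # canonical correlations between two multivariate samples (QR + SVD)
    qx, _ = np.linalg.qr(X - X.mean(axis=0))
    qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.clip(np.linalg.svd(qx.T @ qy, compute_uv=False), 0.0, 1.0)

# synthetic inter-trial variability of a 3-DoF effector at two time
# points: one shared variability mode propagates from t1 to t2
rng = np.random.default_rng(1)
trials = 200
shared = rng.normal(size=(trials, 1))
X = shared @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(trials, 3))
Y = shared @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(trials, 3))
rho = canonical_correlations(X, Y)
```

The leading correlation captures the coupled variability mode; the remaining ones stay near the noise floor, which is what distinguishes this analysis from a single Pearson correlation in the 1-DoF case.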

  9. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    NASA Astrophysics Data System (ADS)

    Yue, L.; Hsu, T. J.

    2017-12-01

    Direct numerical simulation (DNS) is regarded as a powerful tool for investigating turbulent flow featuring a wide range of temporal and spatial scales. With the application of a coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created to simulate flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing the periodic boundary condition in both. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, with no-slip conditions on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity is calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, and an iteration method was applied to overcome this restriction in the correction step for pressure by solving the Helmholtz equation. The numerical solver is written in the object-oriented C++ programming language, utilizing the Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify and validate the numerical model, and very good agreement is achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
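
The wall-normal discretization can be illustrated with the standard Chebyshev differentiation matrix on Chebyshev-Gauss-Lobatto points (Trefethen's classical construction); this generic numpy sketch, not the solver's C++ code, demonstrates the spectral accuracy such schemes rely on.

```python
import numpy as np

def cheb_diff_matrix(N):
    # Chebyshev differentiation matrix on the N+1 Chebyshev-Gauss-Lobatto
    # points x_j = cos(pi*j/N)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x   # negative row-sum trick

D, x = cheb_diff_matrix(24)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
err = np.max(np.abs(D @ u - du_exact))     # spectrally small
```

With only 25 points the derivative of a smooth function is accurate to many digits, which is why the Chebyshev expansion is attractive for the resolved near-wall region.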

  10. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing works with non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected according to the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
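
The sparsity estimation step can be sketched directly from the description: take the 2D DCT, normalize and sort the coefficient energies in descending order, and count how many coefficients are needed to reach the energy threshold. The synthetic image and threshold are illustrative, assuming SciPy's `dctn` for the transform.

```python
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(img, energy_threshold):
    # 2D DCT -> normalize coefficient energies, sort in descending
    # order, count how many terms reach the energy threshold
    energy = np.sort((dctn(img, norm='ortho') ** 2).ravel())[::-1]
    energy /= energy.sum()
    k = int(np.searchsorted(np.cumsum(energy), energy_threshold)) + 1
    return k / img.size          # fraction of dominant coefficients

# smooth synthetic patch: energy concentrates in few DCT coefficients
yy, xx = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * xx / 32) + 0.5 * np.cos(2 * np.pi * yy / 16)
ratio = estimate_sparsity(img, 0.999)
```

The resulting fraction can then drive the choice of the number of BCS measurements per block.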

  11. Inverse scattering pre-stack depth imaging and its comparison to some depth migration methods for imaging rich fault complex structures

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal

    2012-06-01

    Migration is an important issue in seismic imaging of complex structures. In the past decade, depth imaging has become an important tool for producing accurate images, in place of time-domain imaging. The challenge for depth migration methods, however, lies in revealing the complex structure of the subsurface. There are many depth migration methods, each with its own advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation, which we hope can serve as a solution for imaging complex structures in Indonesia, especially in rich thrust-fault zones. In this research, we develop a recent advance in wave equation migration based on time-domain inverse scattering, which uses a more natural description of wave propagation in terms of scattered waves. This pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true amplitude recovery, an inverse-of-divergence procedure and recovery of transmission loss are incorporated into the pre-stack migration. A benchmark of the proposed inverse scattering pre-stack depth migration against other migration methods is also presented, i.e., wave equation pre-stack depth migration, wave equation depth migration, and pre-stack time migration. The inverse scattering pre-stack depth migration successfully imaged a rich fault zone containing extremely steep dips and produced a seismic image of superior quality, much better than those of the other migration methods.

  12. A novel image encryption algorithm using chaos and reversible cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. The algorithm uses an intertwining logistic map with complex behavior together with periodic-boundary reversible cellular automata. We split each pixel of the image into units of 4 bits and then, in the confusion stage, use a pseudorandom key stream generated by the intertwining logistic map to permute these units. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve bit-level diffusion; only the higher 4 bits of each pixel are considered, because they carry almost all of the information in an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. This algorithm belongs to the class of symmetric systems.
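
The confusion stage can be sketched generically: a chaotic keystream orders the 4-bit units, and sorting the same keystream at the receiver inverts the permutation. The plain logistic map stands in here for the paper's intertwining logistic map, and all parameter values are illustrative.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn=100):
    # keystream from the plain logistic map x -> r*x*(1-x); the paper
    # uses an intertwining logistic map, simplified here for brevity
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

# split 8-bit pixels into 4-bit units, permute them with the keystream
pixels = np.arange(16, dtype=np.uint8)
units = np.concatenate([pixels >> 4, pixels & 0x0F])
perm = np.argsort(logistic_keystream(0.3141, 3.9999, units.size))
scrambled = units[perm]
inv = np.argsort(perm)     # same key (x0, r) inverts the permutation
```

Decryption regenerates the identical keystream from the key (x0, r) and applies the inverse permutation before undoing the RCA diffusion rounds.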

  13. Discrete rational and breather solution in the spatial discrete complex modified Korteweg-de Vries equation and continuous counterparts.

    PubMed

    Zhao, Hai-Qiong; Yu, Guo-Fu

    2017-04-01

    In this paper, a spatial discrete complex modified Korteweg-de Vries equation is investigated. The Lax pair, conservation laws, Darboux transformations, and breather and rational wave solutions to the semi-discrete system are presented. The distinguishing feature of the model is that the discrete rational solution can possess new W-shaped rational periodic-solitary waves that were not reported before. In addition, the first-order rogue waves reach peak amplitudes of at least three times the background amplitude, whereas their continuous counterparts peak at exactly three times the constant background. Finally, the integrability of the discrete system, including the Lax pair, conservation laws, Darboux transformations, and explicit solutions, yields the counterparts of the continuous system in the continuum limit.
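
For context, the abstract does not write out the continuous counterpart; the focusing complex modified Korteweg-de Vries equation is commonly written (up to scaling) as

```latex
% focusing complex mKdV for a complex-valued field q(x,t)
q_t + q_{xxx} + 6\,|q|^{2}\,q_x = 0
```

Per the abstract, the first-order rational (rogue-wave) solutions of this continuous equation over a constant background c peak at exactly 3c, while the discrete rational solutions can exceed that ratio.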

  14. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions according to the fuzzy C-means equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and to laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
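
The centroid and membership updates borrowed from fuzzy C-means can be sketched as a single iteration function; this is the standard FCM update (with fuzzifier m = 2), not the AFLC network itself, and the two-cluster demo data are illustrative.

```python
import numpy as np

def fcm_update(X, centroids, m=2.0):
    # one fuzzy C-means iteration: membership update, then centroid update
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    um = u.T ** m
    return u, (um @ X) / um.sum(axis=1, keepdims=True)

# two well-separated clusters; a few iterations relocate the centroids
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(3.0, 0.3, (100, 2))])
c = np.array([[0.5, 0.5], [2.5, 2.5]])
for _ in range(20):
    u, c = fcm_update(X, c)
```

In AFLC this incremental relocation happens after the competitive and distance-comparison stages have proposed a cluster assignment for each new sample.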

  15. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts to automate ventricle segmentation and tracking in echocardiography, the problem remains challenging owing to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method that specifically aims to increase the segmentability of echocardiographic features, such as the endocardium, and to improve image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and the discrete wavelet transform. For evaluation, the proposed method is compared with several well-known techniques, and different metrics are implemented to evaluate its performance. It is concluded that the presented pixel-based method, based on the integration of PCA and DWT, gives the best result for the segmentability of cardiac ultrasound images and the best performance on all metrics.
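
A minimal version of the PCA side of the fusion rule weights two overlapping views by the leading eigenvector of their 2x2 pixel covariance (the paper combines this with a DWT decomposition, omitted here); all names and data are illustrative.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    # weights from the leading eigenvector of the 2x2 covariance of the
    # two overlapping views: a standard pixel-level PCA fusion rule
    cov = np.cov(np.stack([img_a.ravel(), img_b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

rng = np.random.default_rng(3)
scene = rng.normal(size=(32, 32))
view_a = scene + 0.1 * rng.normal(size=scene.shape)   # cleaner view
view_b = scene + 1.0 * rng.normal(size=scene.shape)   # noisier view
wa, wb = pca_fusion_weights(view_a, view_b)
fused = wa * view_a + wb * view_b
```

In the hybrid scheme this weighting is applied per wavelet subband rather than to raw pixels, which is what lets it sharpen structure while suppressing speckle.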

  16. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recently popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
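
The discrete-domain regularizer the analysis starts from can be exercised directly: build a 4-connected grid Laplacian L and denoise by solving (I + mu*L) x = y, the closed-form minimizer of ||x - y||^2 + mu * x^T L x. This sketch uses uniform edge weights rather than the optimized metric derived in the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def grid_laplacian(h, w):
    # combinatorial graph Laplacian of a 4-connected pixel grid
    idx = np.arange(h * w).reshape(h, w)
    rows, cols = [], []
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows += [a.ravel(), b.ravel()]
        cols += [b.ravel(), a.ravel()]
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    W = sparse.coo_matrix((np.ones(rows.size), (rows, cols)),
                          shape=(h * w, h * w))
    return sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W.tocsr()

# denoise a piecewise constant image by solving (I + mu*L) x = y
h = w = 32
rng = np.random.default_rng(2)
clean = np.zeros((h, w))
clean[:, w // 2:] = 1.0
noisy = clean + 0.3 * rng.normal(size=clean.shape)
mu = 2.0
A = sparse.eye(h * w, format='csr') + mu * grid_laplacian(h, w)
denoised = spsolve(A, noisy.ravel()).reshape(h, w)
```

Edge-adaptive weights, as in the paper's optimal metric, would reduce the blur this uniform graph introduces across the step.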

  17. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.

  18. Minimizing finite-volume discretization errors on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian

    2017-11-01

    Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, which is essential for the numerical resolution of PDEs. The use of polyhedral meshes nonetheless introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee meshes free of non-planar faces. The present work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized, and CFD results for cases with known solutions are presented to assess the improvements the optimization approach can provide.
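
One of the named error sources is easy to quantify: non-orthogonality is the angle between the owner-to-neighbour centroid vector and the face normal. A minimal sketch of this common finite-volume quality measure (not necessarily the exact measure optimized in this work):

```python
import numpy as np

def face_nonorthogonality(c_owner, c_neigh, face_normal):
    # angle (degrees) between the owner-to-neighbour centroid vector
    # and the face normal; 0 degrees is a perfectly orthogonal face
    d = np.asarray(c_neigh) - np.asarray(c_owner)
    n = np.asarray(face_normal)
    cosang = d @ n / (np.linalg.norm(d) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# orthogonal hexahedral-like pair vs. a skewed polyhedral pair
ortho = face_nonorthogonality([0, 0, 0], [1, 0, 0], [1, 0, 0])
skewed = face_nonorthogonality([0, 0, 0], [1, 0.5, 0], [1, 0, 0])
```

A mesh optimizer of the kind described above would move cell vertices to drive such per-face angles (and the analogous skewness and non-planarity measures) toward zero.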

  19. Preferential Amygdala Reactivity to the Negative Assessment of Neutral Faces

    PubMed Central

    Blasi, Giuseppe; Hariri, Ahmad R.; Alce, Guilna; Taurisano, Paolo; Sambataro, Fabio; Das, Saumitra; Bertolino, Alessandro; Weinberger, Daniel R.; Mattay, Venkata S.

    2010-01-01

    Background Prior studies suggest that the amygdala shapes complex behavioral responses to socially ambiguous cues. We explored human amygdala function during explicit behavioral decision making about discrete emotional facial expressions that can represent socially unambiguous and ambiguous cues. Methods During functional magnetic resonance imaging, 43 healthy adults were required to make complex social decisions (i.e., approach or avoid) about either relatively unambiguous (i.e., angry, fearful, happy) or ambiguous (i.e., neutral) facial expressions. Amygdala activation during this task was compared with that elicited by simple, perceptual decisions (sex discrimination) about the identical facial stimuli. Results Angry and fearful expressions were more frequently judged as avoidable and happy expressions most often as approachable. Neutral expressions were equally judged as avoidable and approachable. Reaction times to neutral expressions were longer than those to angry, fearful, and happy expressions during social judgment only. Imaging data on stimuli judged to be avoided revealed a significant task by emotion interaction in the amygdala. Here, only neutral facial expressions elicited greater activity during social judgment than during sex discrimination. Furthermore, during social judgment only, neutral faces judged to be avoided were associated with greater amygdala activity relative to neutral faces that were judged as approachable. Moreover, functional coupling between the amygdala and both dorsolateral prefrontal (social judgment > sex discrimination) and cingulate (sex discrimination > social judgment) cortices was differentially modulated by task during processing of neutral faces. Conclusions Our results suggest that increased amygdala reactivity and differential functional coupling with prefrontal circuitries may shape complex decisions and behavioral responses to socially ambiguous cues. PMID:19709644

  20. A Robust Image Watermarking in the Joint Time-Frequency Domain

    NASA Astrophysics Data System (ADS)

    Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın

    2010-12-01

    With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques have been proposed as a solution for copyright protection of digital media files. In this paper, a new, robust, high-capacity watermarking method based on a spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET), calculated by the Gabor expansion, to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in this domain, thereby combining the advantages of spatial- and spectral-domain watermarking methods into a robust, invisible, secure, and high-capacity scheme. A correlation-based detector is also proposed to detect and extract any possible watermark from an image. The proposed method was tested on commonly used test images under various signal processing attacks such as additive noise, Wiener and median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of these attacks.
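
    The embed-and-detect pipeline described above can be sketched in simplified form. The sketch below substitutes an orthonormal 2D DCT for the paper's Gabor-based discrete evolutionary transform, and the coefficient positions, watermark strength `alpha`, and key generation are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II basis: rows are cosine patterns, C @ C.T == I
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] = np.sqrt(1.0 / N)
    return C

rng = np.random.default_rng(0)
N, m, alpha = 32, 64, 5.0        # image size, watermark length, strength (assumed)
C = dct_matrix(N)
img = rng.random((N, N))

# secret key: pseudo-random mid-band coefficient positions and a +/-1 sequence
ii, jj = np.meshgrid(np.arange(4, 16), np.arange(4, 16))
idx = rng.choice(ii.size, size=m, replace=False)
pos = (ii.ravel()[idx], jj.ravel()[idx])
w = rng.choice([-1.0, 1.0], size=m)

X = C @ img @ C.T                # forward 2D transform
X[pos] += alpha * w              # additive embedding on selected coefficients
img_w = C.T @ X @ C              # watermarked image

def detect(image, key):
    # correlation detector: responds strongly only to the correct key
    return float(np.mean((C @ image @ C.T)[pos] * key))

z_marked, z_clean = detect(img_w, w), detect(img, w)
```

    The detector response on the watermarked image concentrates near `alpha`, while an unmarked image yields a near-zero correlation, which is what makes a simple threshold test workable.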

  1. Quasi-three-dimensional particle imaging with digital holography.

    PubMed

    Kemppinen, Osku; Heinson, Yuli; Berg, Matthew

    2017-05-01

    In this work, approximate three-dimensional structures of microparticles are generated with digital holography using an automated focus method. This is done by stacking a collection of silhouette-like images of a particle reconstructed from a single in-line hologram. The method enables estimation of the particle size in the longitudinal and transverse dimensions. Using the discrete dipole approximation, the method is tested computationally by simulating holograms for a variety of particles and attempting to reconstruct the known three-dimensional structure. It is found that poor longitudinal resolution strongly perturbs the reconstructed structure, yet the method does provide an approximate sense for the structure's longitudinal dimension. The method is then applied to laboratory measurements of holograms of single microparticles and their scattering patterns.

  2. Time Series Remote Sensing in Monitoring the Spatio-Temporal Dynamics of Plant Invasions: A Study of Invasive Saltcedar (Tamarix Spp.)

    NASA Astrophysics Data System (ADS)

    Diao, Chunyuan

    In today's big data era, the increasing availability of satellite and airborne platforms at various spatial and temporal scales creates unprecedented opportunities to understand complex and dynamic systems such as plant invasion. Time series remote sensing is becoming increasingly important for monitoring Earth system dynamics and interactions. To date, most time series remote sensing studies have been conducted with images acquired at coarse spatial scales, owing to their relatively high temporal resolution. The construction of time series at fine spatial scales, however, has been limited to a few discrete images acquired within or across years. The objective of this research is to advance time series remote sensing at fine spatial scales, particularly to shift from discrete to continuous time series remote sensing. The objective will be achieved through the following aims: 1) Advance intra-annual time series remote sensing under the pure-pixel assumption; 2) Advance intra-annual time series remote sensing under the mixed-pixel assumption; 3) Advance inter-annual time series remote sensing in monitoring land surface dynamics; and 4) Advance the species distribution model with time series remote sensing. Taking invasive saltcedar as an example, four methods (i.e., a phenological time series remote sensing model, a temporal partial unmixing method, a multiyear spectral angle clustering model, and a time series remote sensing-based spatially explicit species distribution model) were developed to achieve these objectives. Results indicated that the phenological time series remote sensing model could effectively map saltcedar distributions by characterizing the seasonal phenological dynamics of plant species throughout the year. 
The proposed temporal partial unmixing method, compared to conventional unmixing methods, could more accurately estimate saltcedar abundance within a pixel by exploiting the temporal signatures of saltcedar. The multiyear spectral angle clustering model could guide the selection of the most representative remotely sensed image for repetitive saltcedar mapping over space and time. By incorporating spatial autocorrelation, the species distribution model developed in the study could identify suitable habitats of saltcedar at a fine spatial scale and locate areas at high risk of saltcedar infestation. Among the 10 environmental variables, the distance to the river and the phenological attributes summarized by the time series remote sensing were regarded as the most important. The methods developed in this study provide new perspectives on how continuous time series can be leveraged under various conditions to investigate plant invasion dynamics.
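
    The multiyear spectral angle clustering model rests on the spectral angle between pixel spectra, a standard similarity measure. A minimal sketch (the five-band spectra below are invented for illustration) shows why the angle is insensitive to overall brightness, which makes it attractive for comparing images acquired under different illumination across years:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two pixel spectra; scaling one spectrum
    by a constant leaves the angle unchanged."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

veg = np.array([0.05, 0.08, 0.06, 0.45, 0.50])   # vegetation-like spectrum
soil = np.array([0.20, 0.25, 0.30, 0.35, 0.40])  # soil-like spectrum

a_same = spectral_angle(veg, 0.7 * veg)   # same material, darker illumination
a_diff = spectral_angle(veg, soil)        # different materials
```

    Clustering pixels (or whole acquisition dates) by this angle is one way to judge which image is most representative of a multiyear stack.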

  3. SPECIATION OF ORGANICS IN WATER WITH RAMAN SPECTROSCOPY: UTILITY OF IONIC STRENGTH VARIATION

    EPA Science Inventory

    We have developed and are applying an experimental and mathematical method for describing the micro-speciation of complex organic contaminants in aqueous media. For our case, micro-speciation can be defined as qualitative and quantitative identification of all discrete forms of ...

  4. Non-rigid image registration using graph-cuts.

    PubMed

    Tang, Tommy W H; Chung, Albert C S

    2007-01-01

    Non-rigid image registration is an ill-posed and challenging problem due to its extremely high number of degrees of freedom and the inherent requirement of smoothness. The graph-cuts method is a powerful combinatorial optimization tool that has been successfully applied to image segmentation and stereo matching. Under certain constraints, it yields either a global minimum or a local minimum in a strong sense. It is therefore interesting to examine the effect of using graph cuts in non-rigid image registration. In this paper, we formulate non-rigid image registration as a discrete labeling problem. Each pixel in the source image is assigned a displacement label (which is a vector) indicating which position in the floating image it spatially corresponds to. A smoothness constraint based on the first derivative is used to penalize sharp changes in displacement labels across pixels. The whole system can be optimized by the graph-cuts method via alpha-expansions. We compare 2D and 3D registration results of our method with two state-of-the-art approaches. Our method proves more robust across challenging non-rigid registration cases and achieves higher registration accuracy.
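
    The discrete-labeling formulation can be made concrete with a toy example. The sketch below builds the data-cost volume for horizontal displacement labels and minimizes the same data-plus-smoothness energy with simple iterated conditional modes (ICM) as a stand-in for the paper's alpha-expansion graph cuts; the image sizes, label set, and weight `lam` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, t = 20, 24, 2                     # image size, true horizontal shift
src = rng.random((H, W))
flo = np.roll(src, t, axis=1)           # floating image: src shifted by t,
                                        # so pixel p in src matches column p+t

labels = np.arange(-3, 4)               # candidate horizontal displacements
lam = 0.1                               # smoothness weight

# data cost D[p, l] = (src[p] - flo[p + l])^2, via circular shift for simplicity
D = np.stack([(src - np.roll(flo, -l, axis=1)) ** 2 for l in labels], axis=-1)

L = np.argmin(D, axis=-1)               # initialize with per-pixel best label
for _ in range(5):                      # ICM sweeps over the labeling energy
    for i in range(H):
        for j in range(W):
            cost = D[i, j].copy()
            # first-derivative smoothness against the 4-neighborhood
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < H and 0 <= nj < W:
                    cost += lam * np.abs(labels - labels[L[ni, nj]])
            L[i, j] = np.argmin(cost)

recovered = labels[L]                   # per-pixel displacement estimate
```

    On this noise-free toy problem the true shift is recovered everywhere; graph cuts matter in realistic cases where the data term alone is ambiguous.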

  5. Detection of cracks on concrete surfaces by hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Santos, Bruno O.; Valença, Jonatas; Júlio, Eduardo

    2017-06-01

    All large infrastructures worldwide must have a suitable monitoring and maintenance plan, aiming to evaluate their behaviour and predict timely interventions. In the particular case of concrete infrastructures, the detection and characterization of crack patterns is a major indicator of their structural response. In this scope, several image-processing-based methods have been proposed. Usually, these methods rely on image binarization followed by mathematical morphology operations to identify cracks on the concrete surface. In most cases, published work focuses on restricted areas of the concrete surface and on a single crack. On-site, methods and algorithms have to deal with several factors that interfere with the results, namely dirt and biological colonization. Thus, automating a procedure for on-site characterization of crack patterns is of great interest. This advance may result in an effective tool to support maintenance strategies and intervention planning. This paper presents research based on the analysis and processing of hyperspectral images for the detection and classification of cracks on concrete structures. The objective of the study is to evaluate the applicability of several wavelengths of the electromagnetic spectrum for the classification of cracks on concrete surfaces. An image survey considering highly discretized wavelengths between 425 nm and 950 nm, with bandwidths of 25 nm, was performed on concrete specimens. The specimens were produced with a crack pattern induced by applying a load under displacement control. The tests were designed to simulate common on-site drawbacks; in particular, the surface of the specimen was subjected to biological colonization (leaves and moss). To evaluate the results and enhance crack patterns, a clustering method, namely the k-means algorithm, is applied. The research conducted makes it possible to assess the suitability of combining the k-means clustering algorithm with highly discretized hyperspectral images for crack detection on concrete surfaces, considering cracking combined with the most common concrete anomalies, namely biological colonization.
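
    The clustering step can be illustrated on a synthetic hyperspectral cube. The sketch below runs a minimal two-cluster k-means on per-pixel spectra, with an assumed darker "crack" signature across all bands; the band count, reflectance values, and brightest/darkest initialization are illustrative, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, B = 40, 40, 22          # 22 bands, e.g. 425-950 nm in 25 nm steps
crack = np.zeros((H, W), bool)
crack[:, 18:20] = True        # a vertical crack two pixels wide

# synthetic cube: crack pixels are darker than sound concrete at every band
base = np.linspace(0.4, 0.6, B)                  # assumed concrete spectrum
cube = base + 0.05 * rng.standard_normal((H, W, B))
cube[crack] -= 0.25                              # cracks reflect less light

X = cube.reshape(-1, B)                          # one spectrum per pixel

def kmeans2(X, iters=10):
    # k = 2, seeded with the brightest and darkest spectra for determinism
    centers = X[[X.mean(1).argmax(), X.mean(1).argmin()]]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        lab = d.argmin(1)
        centers = np.stack([X[lab == j].mean(0) for j in range(2)])
    return lab

lab = kmeans2(X).reshape(H, W)
pred = lab == 1               # cluster 1 was seeded with the darkest spectrum
accuracy = np.mean(pred == crack)
```

    With 22 bands the spectral separation dominates the per-band noise, so the crack mask is recovered almost perfectly on this synthetic case.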

  6. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2010-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Tests from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or by modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that the accuracies of the node-centered and the best cell-centered schemes are comparable at an equivalent number of degrees of freedom.

  7. Method of conditional moments (MCM) for the Chemical Master Equation: a unified framework for the method of moments and hybrid stochastic-deterministic models.

    PubMed

    Hasenauer, J; Wolf, V; Kazeroonian, A; Theis, F J

    2014-09-01

    The time-evolution of continuous-time discrete-state biochemical processes is governed by the Chemical Master Equation (CME), which describes the probability of the molecular counts of each chemical species. As the corresponding number of discrete states is, for most processes, large, a direct numerical simulation of the CME is in general infeasible. In this paper we introduce the method of conditional moments (MCM), a novel approximation method for the solution of the CME. The MCM employs a discrete stochastic description for low-copy number species and a moment-based description for medium/high-copy number species. The moments of the medium/high-copy number species are conditioned on the state of the low abundance species, which allows us to capture complex correlation structures arising, e.g., for multi-attractor and oscillatory systems. We prove that the MCM provides a generalization of previous approximations of the CME based on hybrid modeling and moment-based methods. Furthermore, it improves upon these existing methods, as we illustrate using a model for the dynamics of stochastic single-gene expression. This application example shows that due to the more general structure, the MCM allows for the approximation of multi-modal distributions.

  8. n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computational complexity, this paper proposes an efficient generation method for n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and lower algorithmic complexity, generates nD Cat matrices with lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
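
    The paper's Laplace-expansion construction is not reproduced here, but the underlying idea, that a unimodular integer matrix acts as an invertible permutation of the pixel grid, can be sketched with the classical 2D Arnold cat map:

```python
import numpy as np

N = 64                                   # image side length
A = np.array([[1, 1], [1, 2]])           # 2D Cat matrix, det(A) = 1
A_inv = np.array([[2, -1], [-1, 1]])     # integer inverse, since det(A) = 1

y, x = np.mgrid[0:N, 0:N]
coords = np.stack([x.ravel(), y.ravel()])      # 2 x N^2 pixel coordinates

scrambled = (A @ coords) % N                   # forward map on the grid
restored = (A_inv @ scrambled) % N             # inverse map undoes it

# det(A) = 1 (mod N) makes the map a bijection on the N x N grid,
# which is exactly what pixel-scrambling applications require
flat = scrambled[0] * N + scrambled[1]
is_permutation = len(np.unique(flat)) == N * N
round_trip_ok = np.array_equal(restored, coords)
```

    Higher-dimensional Cat matrices generalize this: any integer matrix with determinant 1 modulo N scrambles an N-point-per-axis lattice reversibly.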

  9. Applications of QCL mid-IR imaging to the advancement of pathology

    NASA Astrophysics Data System (ADS)

    Sreedhar, Hari; Varma, Vishal K.; Bird, Benjamin; Guzman, Grace; Walsh, Michael J.

    2017-03-01

    Quantum Cascade Laser (QCL) spectroscopic imaging is a novel technique with many potential applications to histopathology. Like traditional Fourier Transform Infrared (FT-IR) imaging, QCL spectroscopic imaging derives biochemical data coupled to the spatial information of a tissue sample, and can be used to improve the diagnostic and prognostic value of assessment of a tissue biopsy. This technique also offers advantages over traditional FT-IR imaging, specifically the capacity for discrete frequency and real-time imaging. In this work we present applications of QCL spectroscopic imaging to tissue samples, including discrete frequency imaging, to compare with FT-IR and its potential value to pathology.

  10. Method for enhancing single-trial P300 detection by introducing the complexity degree of image information in rapid serial visual presentation tasks

    PubMed Central

    Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi

    2017-01-01

    The use of electroencephalogram (EEG) signals generated while humans view images is a new thrust in image retrieval technology. A P300 component is induced in the EEG when subjects see their point of interest in a target image under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary with different experimental parameters, such as target probability and stimulus semantics. Thus, we propose a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection relative to traditional single-trial P300 detection algorithms. We define an image complexity parameter based on features from the different layers of a convolutional neural network (CNN), use it to quantify the effect of images of different complexity on the P300 component, and train a specialized classifier for each complexity level. We compared TRICP with the HDCA algorithm: TRICP achieves significantly higher detection accuracy (Wilcoxon signed-rank test, p<0.05). Thus, the proposed method can also be used in other visual-task-related single-trial event-related potential detection. PMID:29283998

  11. A Discrete Model for Color Naming

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Le Troter, A.; Sequeira, J.; Boi, J. M.

    2006-12-01

    The ability to associate labels to colors is very natural for human beings. However, this apparently simple task hides very complex and still unsolved problems, spanning many different disciplines ranging from neurophysiology to psychology and imaging. In this paper, we propose a discrete model for computational color categorization and naming. Starting from the 424 color specimens of the OSA-UCS set, we propose a fuzzy partitioning of the color space. Each of the 11 basic color categories identified by Berlin and Kay is modeled as a fuzzy set whose membership function is implicitly defined by fitting the model to the results of an ad hoc psychophysical experiment (Experiment 1). Each OSA-UCS sample is represented by a feature vector whose components are the memberships to the different categories. The discrete model consists of a three-dimensional Delaunay triangulation of the CIELAB color space which associates each OSA-UCS sample to a vertex of a 3D tetrahedron. Linear interpolation is used to estimate the membership values of any other point in the color space. Model validation is performed both directly, through the comparison of the predicted membership values to the subjective counterparts, as evaluated via another psychophysical test (Experiment 2), and indirectly, through the investigation of its exploitability for image segmentation. The model has proved to be successful in both cases, providing an estimation of the membership values in good agreement with the subjective measures as well as a semantically meaningful color-based segmentation map.
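
    The linear interpolation of memberships inside a Delaunay tetrahedron can be sketched with barycentric coordinates. The vertex positions and membership values below are invented for illustration, not taken from the OSA-UCS data:

```python
import numpy as np

# four hypothetical color samples (vertices of one tetrahedron in CIELAB)
verts = np.array([[50.0, 0.0, 0.0],
                  [60.0, 20.0, 0.0],
                  [55.0, 0.0, 25.0],
                  [70.0, 10.0, 10.0]])
# membership of each vertex to three made-up categories (rows sum to 1)
memb = np.array([[0.1, 0.6, 0.3],
                 [0.8, 0.1, 0.1],
                 [0.2, 0.2, 0.6],
                 [0.4, 0.4, 0.2]])

def interp_membership(p):
    # barycentric coordinates: solve [v1-v0 | v2-v0 | v3-v0] w = p - v0
    T = (verts[1:] - verts[0]).T
    w = np.linalg.solve(T, p - verts[0])
    bary = np.concatenate([[1.0 - w.sum()], w])   # weights sum to 1
    return bary @ memb                            # linear membership blend

center = verts.mean(axis=0)
m_center = interp_membership(center)
```

    Because the barycentric weights sum to one, interpolated memberships remain a valid fuzzy partition (they still sum to one) anywhere inside the tetrahedron.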

  12. A framework for grand scale parallelization of the combined finite discrete element method in 2d

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.

    2014-09-01

    Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, rock mass strength characterization problems, etc. The reality is that most of these were accomplished in a 2D and/or single processor realm. In this work a hardware independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM, (V-FDEM). With V-FDEM, a parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.

  13. Joint image and motion reconstruction for PET using a B-spline motion model.

    PubMed

    Blume, Moritz; Navab, Nassir; Rafecas, Magdalena

    2012-12-21

    We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.
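
    A minimal one-dimensional sketch of a B-spline motion model, assuming a uniform free-form-deformation-style grid (the spacing, grid size, and padding convention are illustrative): control-point displacements are blended by cubic B-spline basis functions, and the partition-of-unity and linear-reproduction properties can be checked directly:

```python
import numpy as np

def bspline_weights(u):
    # uniform cubic B-spline basis values at fractional position u in [0, 1)
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def displacement(x, ctrl, h):
    # evaluate the 1D B-spline displacement field at positions x;
    # ctrl[j] holds control point j-1 (one-point left padding), and
    # control points i-1 .. i+2 influence any position in cell i
    i = np.floor(x / h).astype(int)
    u = x / h - i
    w = bspline_weights(u)
    return sum(w[k] * ctrl[i + k] for k in range(4))

h, M = 5.0, 8                               # grid spacing and number of cells
x = np.linspace(0.0, M * h - 1e-9, 200)

ctrl_const = np.full(M + 3, 2.5)            # constant control displacements
field_const = displacement(x, ctrl_const, h)

# control values sampled from a linear function reproduce it exactly
ctrl_lin = 0.1 * (np.arange(-1, M + 2) * h) + 1.0
field_lin = displacement(x, ctrl_lin, h)
```

    The appeal for joint reconstruction is that the whole motion function is parameterized by the small `ctrl` vector rather than a dense displacement field, which is why the B-spline model is faster and lighter on memory.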

  14. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  15. A novel iris patterns matching algorithm of weighted polar frequency correlation

    NASA Astrophysics Data System (ADS)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

    Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method, Weighted Polar Frequency Correlation (WPFC), to match and evaluate two iris images; in fact, it can also be used to evaluate the similarity of any two images. The WPFC method is a novel matching and evaluation method for iris images that is completely different from conventional methods. For instance, the classical John Daugman method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream and then matches two bit streams by Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. It is motivated by the observation that the iris patterns carrying the most information for recognition are the fine, high-frequency structures rather than the gross shapes of iris images. Therefore, we transform iris images into the frequency domain and assign different weights to different frequencies. We then calculate the correlation of two iris images in the frequency domain and evaluate them by summing the discrete correlation values with regulated weights, comparing the result with a preset threshold to tell whether the two iris images were captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable, and it provides a new prospect for iris recognition systems.
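
    One hedged reading of the WPFC similarity, not necessarily the authors' exact formula, is a per-frequency normalized correlation weighted toward high radial frequencies, applied to iris images already unwrapped into a radius-by-angle grid:

```python
import numpy as np

def wpfc_score(img1, img2, gamma=1.0):
    """Similarity of two unwrapped (radius x angle) iris images:
    per-frequency normalized correlation, weighted toward the high
    frequencies where the discriminative fine structure lives.
    The weight exponent gamma is an assumption of this sketch."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    num = np.real(F1 * np.conj(F2))
    den = np.abs(F1) * np.abs(F2) + 1e-12
    corr = num / den                          # cos of phase difference, in [-1, 1]
    fy = np.fft.fftfreq(img1.shape[0])[:, None]
    fx = np.fft.fftfreq(img1.shape[1])[None, :]
    w = (fx ** 2 + fy ** 2) ** (gamma / 2)    # radial weight, zero at DC
    return float(np.sum(w * corr) / np.sum(w))

rng = np.random.default_rng(3)
iris_a = rng.random((32, 128))
iris_a_noisy = iris_a + 0.05 * rng.standard_normal((32, 128))  # re-capture
iris_b = rng.random((32, 128))                                 # other subject

s_same = wpfc_score(iris_a, iris_a_noisy)
s_diff = wpfc_score(iris_a, iris_b)
```

    Thresholding the weighted score then separates same-subject from different-subject pairs, mirroring the decision rule described in the abstract.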

  16. Asynchronous State Estimation for Discrete-Time Switched Complex Networks With Communication Constraints.

    PubMed

    Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li

    2018-05-01

    This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. Also, the event-based communication, signal quantization, and random packet dropout problems are studied due to the limited communication resources. With the help of switched system theory and by resorting to stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.

  17. Hybrid Automated Diagnosis of Discrete/Continuous Systems

    NASA Technical Reports Server (NTRS)

    Park, Han; James, Mark; MacKey, Ryan; Cannon, Howard; Bajwa, Anapa; Maul, William

    2007-01-01

    A recently conceived method of automated diagnosis of a complex electromechanical system affords a complete set of capabilities for hybrid diagnosis in the case in which the state of the electromechanical system is characterized by both continuous and discrete values (as represented by analog and digital signals, respectively). The method is an integration of two complementary diagnostic systems: (1) beacon-based exception analysis for multi-missions (BEAM), which is primarily useful in the continuous domain and easily performs diagnoses in the presence of transients; and (2) Livingstone, which is primarily useful in the discrete domain and is typically restricted to quasi-steady conditions. BEAM has been described in several prior NASA Tech Briefs articles: "Software for Autonomous Diagnosis of Complex Systems" (NPO-20803), Vol. 26, No. 3 (March 2002), page 33; "Beacon-Based Exception Analysis for Multimissions" (NPO-20827), Vol. 26, No. 9 (September 2002), page 32; "Wavelet-Based Real-Time Diagnosis of Complex Systems" (NPO-20830), Vol. 27, No. 1 (January 2003), page 67; and "Integrated Formulation of Beacon-Based Exception Analysis for Multimissions" (NPO-21126), Vol. 27, No. 3 (March 2003), page 74. Briefly, BEAM is a complete data-analysis method, implemented in software, for real-time or off-line detection and characterization of faults. The basic premise of BEAM is to characterize a system from all available observations and train the characterization with respect to normal phases of operation. The observations are primarily continuous in nature. BEAM isolates anomalies by analyzing the deviations from nominal for each phase of operation. Livingstone is a model-based reasoner that uses a model of a system, controller commands, and sensor observations to track the system's state, and detect and diagnose faults. Livingstone models a system within the discrete domain. Therefore, continuous sensor readings, as well as time, must be discretized. 
To reason about continuous systems, Livingstone uses monitors that discretize the sensor readings using trending and thresholding techniques. In developing the hybrid method, BEAM results were sent to Livingstone to serve as an independent source of evidence in addition to the evidence gathered by Livingstone's standard monitors. The figure depicts the flow of data in an early version of a hybrid system dedicated to diagnosing a simulated electromechanical system. In effect, BEAM served as a "smart" monitor for Livingstone. BEAM read the simulation data, processed the data to form observations, and stored the observations in a file. A monitor stub synchronized the events recorded by BEAM with the output of the Livingstone standard monitors according to time tags. This information was fed to a real-time interface, which buffered and fed the information to Livingstone and requested diagnoses at the appropriate times. In a test, the hybrid system was found to correctly identify a failed component in an electromechanical system for which neither BEAM nor Livingstone alone yielded the correct diagnosis.

  18. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns; applying an inverse cosine transform to the spectra obtained from the single-pixel detector then retrieves a full-colour image. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
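
    The measurement model is easy to simulate: projecting orthonormal DCT basis patterns and recording single-pixel inner products yields the scene's DCT spectrum directly, so an inverse transform recovers the image. The sketch below is grayscale and ignores practical details such as the non-negativity of projectable patterns (real systems shift or split the patterns):

```python
import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II basis: rows are 1D cosine patterns, C @ C.T == I
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] = np.sqrt(1.0 / N)
    return C

rng = np.random.default_rng(4)
N = 16
C = dct_matrix(N)
scene = rng.random((N, N))

# each projected pattern is the outer product of two 1D cosine rows;
# the single-pixel detector records the inner product <pattern, scene>
measurements = np.empty((N, N))
for u in range(N):
    for v in range(N):
        pattern = np.outer(C[u], C[v])
        measurements[u, v] = np.sum(pattern * scene)

# the measurements are exactly the 2D DCT spectrum of the scene,
# so an inverse DCT recovers the image
recovered = C.T @ measurements @ C
```

    Sub-Nyquist operation follows from the same picture: projecting only the low-frequency patterns yields a truncated spectrum whose inverse transform is a smooth approximation of the scene.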

  19. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has a parallel computational complexity lower than that of parallel BCR.
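
    One level of odd-even cyclic reduction eliminates the even-indexed unknowns and leaves a half-size tridiagonal system of the same form, which is what makes the method attractive on parallel machines. A minimal scalar (non-block) sketch for a system of size 2^k - 1, tested on a 1D Poisson problem:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by odd-even cyclic reduction.
    a = sub-diagonal (a[0] = 0), b = diagonal, c = super-diagonal
    (c[-1] = 0), d = right-hand side; requires len(d) == 2**k - 1."""
    n = len(d)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)                  # odd unknowns survive this level
    alpha, beta = a[i] / b[i - 1], c[i] / b[i + 1]
    # combine each odd equation with its two neighbors to eliminate
    # the even unknowns; the result is tridiagonal in the odd unknowns
    a2 = -alpha * a[i - 1]
    b2 = b[i] - alpha * c[i - 1] - beta * a[i + 1]
    c2 = -beta * c[i + 1]
    d2 = d[i] - alpha * d[i - 1] - beta * d[i + 1]
    a2[0], c2[-1] = 0.0, 0.0                # boundary entries are unused
    x = np.zeros(n)
    x[1::2] = cyclic_reduction(a2, b2, c2, d2)
    j = np.arange(0, n, 2)                  # back-substitute even unknowns
    left = np.where(j > 0, x[j - 1], 0.0)
    right = np.where(j < n - 1, x[np.minimum(j + 1, n - 1)], 0.0)
    x[j] = (d[j] - a[j] * left - c[j] * right) / b[j]
    return x

# 1D Poisson test problem: -x_{i-1} + 2 x_i - x_{i+1} = f_i
n = 2 ** 5 - 1
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 2.0)
f = np.random.default_rng(5).standard_normal(n)
x = cyclic_reduction(a, b, c, f)
```

    All eliminations within one level are independent, so each level is a parallel step and only log2(n+1) levels are needed; BCR applies the same recursion with matrix blocks in place of scalars.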

  20. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings, but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT- and DWT-domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirements, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases, and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
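
    A reduced sketch of the feature pipeline, using only the DCT half of the hybrid descriptor: keep a low-frequency coefficient block as the compact feature vector and classify with a combined L2 + Lq distance. The block size `K`, exponent `q`, and random "faces" are assumptions, since the abstract does not fix them:

```python
import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II basis: C @ C.T == I
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] = np.sqrt(1.0 / N)
    return C

def dominant_frequency_features(face, K=8):
    # keep the K x K low-frequency DCT block as a compact descriptor
    C = dct_matrix(face.shape[0])
    return (C @ face @ C.T)[:K, :K].ravel()

def combined_distance(f1, f2, q=1):
    # combination of the L2 and Lq metrics (q is a free parameter here)
    diff = np.abs(f1 - f2)
    return np.linalg.norm(diff) + np.sum(diff ** q) ** (1.0 / q)

rng = np.random.default_rng(6)
face_a = rng.random((32, 32))                 # stand-in gallery image, person A
face_b = rng.random((32, 32))                 # stand-in gallery image, person B
probe = face_a + 0.02 * rng.standard_normal((32, 32))   # noisy re-capture of A

fa, fb, fp = (dominant_frequency_features(im) for im in (face_a, face_b, probe))
match_is_a = combined_distance(fp, fa) < combined_distance(fp, fb)
```

    Because the descriptor is a fixed-length transform of a single image, adding a new person only requires computing one new feature vector, which is how this family of methods avoids retraining.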

  1. Removal of intensity bias in magnitude spin-echo MRI images by nonlinear diffusion filtering

    NASA Astrophysics Data System (ADS)

    Samsonov, Alexei A.; Johnson, Chris R.

    2004-05-01

    MRI data analysis is routinely done on the magnitude part of complex images. While both real and imaginary image channels contain Gaussian noise, magnitude MRI data are characterized by the Rician distribution. However, conventional filtering methods often assume image noise to be zero mean and Gaussian distributed. Estimation of an underlying image using magnitude data produces a biased result. The bias may lead to significant image errors, especially in areas of low signal-to-noise ratio (SNR). The incorporation of the Rician PDF into a noise filtering procedure can significantly complicate the method both algorithmically and computationally. In this paper, we demonstrate that the inherent image phase smoothness of spin-echo MRI images can be utilized for separate filtering of the real and imaginary complex image channels to achieve unbiased image denoising. The concept is demonstrated with a novel nonlinear diffusion filtering scheme developed for complex image filtering. In our proposed method, the separate diffusion processes are coupled through combined diffusion coefficients determined from the image magnitude. The new method has been validated with simulated and real MRI data. The new method has provided efficient denoising and bias removal in conventional and black-blood angiography MRI images obtained using fast spin echo acquisition protocols.
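    The Rician bias that motivates channel-wise filtering is easy to demonstrate numerically. In the sketch below (a toy stand-in; the paper uses coupled nonlinear diffusion rather than plain averaging), averaging magnitudes overestimates a low-SNR signal, while averaging the real and imaginary channels separately recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 0.5           # true underlying intensity (low-SNR regime)
sigma = 1.0            # per-channel Gaussian noise std
n = 200_000

real = signal + sigma * rng.standard_normal(n)
imag = sigma * rng.standard_normal(n)

magnitude = np.abs(real + 1j * imag)   # Rice-distributed samples

# Averaging magnitudes does NOT recover the signal: the Rician mean
# is biased well above the true value 0.5 at this SNR.
print(magnitude.mean())

# Filtering the real and imaginary channels separately (a plain mean
# stands in for the diffusion filter) removes the bias.
denoised = abs(complex(real.mean(), imag.mean()))
print(denoised)
```

    Channel-wise filtering only works because the underlying phase is smooth; that is exactly the spin-echo property the paper exploits.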

  2. High-resolution imaging using a wideband MIMO radar system with two distributed arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi

    2010-05-01

    Imaging a fast maneuvering target has been an active research area in the past decades. Usually, an array antenna with multiple elements is implemented to avoid the motion compensation involved in inverse synthetic aperture radar (ISAR) imaging. Nevertheless, this comes at a price: the hardware complexity is much higher than that of an ISAR imaging system with only one antenna, where the complexity lies in the algorithm instead. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. Furthermore, the system model, the equivalent array production method and the imaging procedure are presented. Compared with the classical real aperture radar (RAR) imaging system, an important contribution of our method is that the imaging system requires lower hardware complexity, since many additional virtual array elements can be obtained. Numerical simulations are provided to test our system and imaging method.

  3. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
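    A minimal frequency-domain CLS filter illustrates the underlying idea. This sketch implements the classic discrete/discrete form with a Laplacian smoothness constraint; it is not Park's continuous/discrete/continuous extension nor the small-kernel spatial-domain implementation described above:

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Constrained least-squares restoration (discrete/discrete form).

    Applies the frequency-domain filter  H* / (|H|^2 + gamma*|P|^2),
    where P is the discrete Laplacian acting as the smoothness
    constraint and gamma balances fidelity against smoothness.
    """
    shape = blurred.shape
    H = np.fft.fft2(psf, shape)
    # Discrete Laplacian kernel, zero-padded to the image size.
    lap = np.zeros(shape)
    lap[:3, :3] = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
    P = np.fft.fft2(lap)
    G = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))
```

    The small-kernel version in the paper approximates such a frequency-domain filter by a compact spatial convolution, trading a little accuracy for much cheaper evaluation.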

  4. A SEMI-LAGRANGIAN TWO-LEVEL PRECONDITIONED NEWTON-KRYLOV SOLVER FOR CONSTRAINED DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraint is a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation, which uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
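    The semi-Lagrangian step at the heart of the scheme can be sketched in one dimension: values are traced back along the velocity to their departure points and interpolated there, which keeps the step stable even for CFL numbers above one. The linear interpolation and the parameters below are illustrative (the paper uses a second-order scheme on a pseudospectral grid):

```python
import numpy as np

def semi_lagrangian_step(u, velocity, dt, dx):
    """One semi-Lagrangian step for  du/dt + v du/dx = 0  on a periodic grid.

    Each grid value is replaced by the field interpolated at its
    departure point x - v*dt; stability does not depend on the CFL number.
    """
    n = u.size
    x = np.arange(n) * dx
    depart = (x - velocity * dt) % (n * dx)   # departure points
    j = np.floor(depart / dx).astype(int)
    w = depart / dx - j                       # linear interpolation weight
    return (1 - w) * u[j % n] + w * u[(j + 1) % n]
```

    Because the step is unconditionally stable, the solver can take far fewer (and larger) time steps than an explicit Runge-Kutta integrator, which is one source of the reported speedups.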

  5. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin

    2015-01-01

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation. (C) 2015 Optical Society of America
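    The TV-regularised objective can be illustrated on a toy 1-D denoising instance. The sketch below uses plain gradient descent on a smoothed TV term; the paper instead couples TV with the tomographic system matrix, solves with a conjugate gradient method, and selects the Lagrangian parameter via a discrete L-curve:

```python
import numpy as np

def tv_objective(x, b, lam=0.3, eps=1e-3):
    # Smoothed total variation keeps the objective differentiable.
    return np.sum((x - b) ** 2) + lam * np.sum(np.sqrt(np.diff(x) ** 2 + eps))

def tv_reconstruct(b, lam=0.3, steps=2000, lr=0.05, eps=1e-3):
    """Gradient descent on  ||x - b||^2 + lam * TV(x)  (1-D toy problem)."""
    x = np.zeros_like(b)
    for _ in range(steps):
        d = np.diff(x)
        t = d / np.sqrt(d * d + eps)          # gradient of smoothed TV
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= t
        grad_tv[1:] += t
        x -= lr * (2.0 * (x - b) + lam * grad_tv)
    return x
```

    The TV term rewards piecewise-constant solutions, which is why the prior suits slices with sharp tissue boundaries even when the number of projections is small.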

  6. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline-lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes are more likely characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
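    Fitting a discrete time constant to such wash-in data can be sketched as follows. The single-TC exponential model and the grid search are deliberate simplifications of the paper's continuous-distribution analysis:

```python
import numpy as np

def fit_time_constant(t, y):
    """Fit y(t) = a * (1 - exp(-t/tau)) by scanning tau and solving for a.

    For each candidate tau, the amplitude a has a closed-form
    least-squares solution, so only tau needs to be searched.
    """
    best = (np.inf, None, None)
    for tau in np.linspace(0.05, 10.0, 400):
        basis = 1.0 - np.exp(-t / tau)
        a = (basis @ y) / (basis @ basis)       # least-squares amplitude
        err = np.sum((y - a * basis) ** 2)
        if err < best[0]:
            best = (err, tau, a)
    return best[1], best[2]
```

    A continuous-distribution analysis replaces the single basis function with a weighted superposition of exponentials over a range of time constants, recovering the weights instead of one tau.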

  7. Nitsche’s Method For Helmholtz Problems with Embedded Interfaces

    PubMed Central

    Zou, Zilong; Aquino, Wilkins; Harari, Isaac

    2016-01-01

    In this work, we use Nitsche's formulation to weakly enforce kinematic constraints at an embedded interface in Helmholtz problems. Allowing embedded interfaces in a mesh provides significant ease for discretization, especially when material interfaces have complex geometries. We provide analytical results that establish the well-posedness of Helmholtz variational problems and convergence of the corresponding finite element discretizations when Nitsche's method is used to enforce kinematic constraints. As in the analysis of conventional Helmholtz problems, we show that the inf-sup constant remains positive provided that Nitsche's stabilization parameter is judiciously chosen. We then apply our formulation to several 2D plane-wave examples that confirm our analytical findings. In doing so, we demonstrate the asymptotic convergence of the proposed method and show that the numerical results are in accordance with the theoretical analysis. PMID:28713177

  8. Shape functions for velocity interpolation in general hexahedral cells

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2002-01-01

    Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
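    The trilinear cell mapping the paper builds on (hexahedra as trilinear images of cubes) can be sketched directly; the quadratic velocity shape functions themselves are more involved and are not reproduced here:

```python
import numpy as np

def trilinear_shape(xi, eta, zeta):
    """The eight trilinear shape functions on the reference cube [0,1]^3.

    Vertex k corresponds to the corner with coordinate bits
    (xi, eta, zeta) = (k & 1, (k >> 1) & 1, (k >> 2) & 1).
    """
    return np.array([
        (1 - xi) * (1 - eta) * (1 - zeta), xi * (1 - eta) * (1 - zeta),
        (1 - xi) * eta * (1 - zeta),       xi * eta * (1 - zeta),
        (1 - xi) * (1 - eta) * zeta,       xi * (1 - eta) * zeta,
        (1 - xi) * eta * zeta,             xi * eta * zeta,
    ])

def map_to_hex(vertices, xi, eta, zeta):
    # vertices: (8, 3) array of corner coordinates; the hexahedral cell
    # is the trilinear image of the reference cube under this map.
    return trilinear_shape(xi, eta, zeta) @ vertices
```

    The cited result is that composing linear (Piola-transformed) velocity interpolants with this trilinear map cannot reproduce uniform flow on a general hexahedron, which is why the paper's new shape functions interpolate velocities quadratically.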

  9. Application of Finite Element Modeling Methods in Magnetic Resonance Imaging-Based Research and Clinical Management

    NASA Astrophysics Data System (ADS)

    Fwu, Peter Tramyeon

    The medical image is very complex by its nature. Modeling built upon the medical image is challenging due to the lack of analytical solutions. The finite element method (FEM) is a numerical technique which can be used to solve partial differential equations. It utilizes a transformation from a continuous domain into solvable discrete sub-domains. In three-dimensional space, FEM has the capability of dealing with complicated structures and heterogeneous interiors. That makes FEM an ideal tool to approach medical-image based modeling problems. In this study, I will address three modeling problems: (1) photon transport inside the human breast, implementing the radiative transfer equation to simulate diffuse optical spectroscopy imaging (DOSI) in order to measure the percent density (PD), which has been proven to be a cancer risk factor in mammography. Our goal is to use MRI as the ground truth to optimize the DOSI scanning protocol to obtain a consistent measurement of PD. Our results show that the DOSI measurement is position and depth dependent, and that a proper scanning scheme and body configuration are needed; (2) heat flow in the prostate, implementing Pennes' bioheat equation to evaluate the cooling performance of regional hypothermia during robot-assisted radical prostatectomy for the individual patient, in order to achieve the optimal cooling setting. Four factors are taken into account in the simulation: blood abundance, artery perfusion, cooling balloon temperature, and the anatomical distance. The results show that blood abundance, prostate size, and anatomical distance are significant factors in the equilibrium temperature of the neurovascular bundle; (3) shape analysis of the hippocampus, using radial distance mapping and two registration methods to find the correlation of sub-regional change with age and cognitive performance, which might not be revealed by volumetric analysis. The results give fundamental knowledge of the normal distribution in young preadolescent children, who may be compared to children with, or at risk of, neurological diseases for early diagnosis.

  10. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  11. A finite elements method to solve the Bloch–Torrey equation applied to diffusion magnetic resonance imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dang Van; NeuroSpin, Bat145, Point Courrier 156, CEA Saclay Center, 91191 Gif-sur-Yvette Cedex; Li, Jing-Rebecca, E-mail: jingrebecca.li@inria.fr

    2014-04-15

    The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch–Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch–Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge–Kutta–Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.

  12. Covalent heterogenization of discrete bis(8-quinolinolato)dioxomolybdenum(VI) and dioxotungsten(VI) complexes by a metal-template/metal-exchange method: Cyclooctene epoxidation catalysts with enhanced performances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ying; Chattopadhyay, Soma; Shibata, Tomohiro

    A metal-template/metal-exchange method was used to imprint covalently attached bis(8-quinolinolato)dioxomolybdenum(VI) and dioxotungsten(VI) complexes onto large surface-area, mesoporous SBA-15 silica to obtain discrete MoO2 VIT and WO2 VIT catalysts bearing different metal loadings, respectively. Homogeneous counterparts, MoO2 VIN and WO2 VIN, as well as randomly ligand-grafted heterogeneous analogues, MoO2 VIG and WO2 VIG, were also prepared for comparison. X-ray absorption fine structure (XAFS), pair distribution function (PDF) and UV–vis data demonstrate that MoO2 VIT and WO2 VIT adopt a more solution-like bis(8-quinolinol) coordination environment than MoO2 VIG and WO2 VIG, respectively. Correspondingly, the templated MoVI and WVI catalysts show superior performances to their randomly grafted counterparts and neat analogues in the epoxidation of cyclooctene. It is found that the representative MoO2 VIT-10% catalyst can be recycled up to five times without significant loss of reactivity, and a heterogeneity test confirms the high stability of the MoO2 VIT-10% catalyst against leaching of active species into solution. The homogeneity of the discrete bis(8-quinolinol) metal spheres templated on SBA-15 should be responsible for the superior performances.

  13. Mingus Discontinuous Multiphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pat Notz, Dan Turner

    Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control-volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model

  14. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
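    The decomposition idea can be sketched with a toy pipeline. Threads and simple thresholding stand in for the paper's MPI processes and Discrete Region Competition, and the sketch omits the inter-worker communication needed when objects cross strip boundaries:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def segment_tile(tile, threshold=0.5):
    # Stand-in local segmentation; the paper runs Discrete Region
    # Competition on each sub-image instead of a global threshold.
    return (tile > threshold).astype(np.uint8)

def parallel_segment(image, tiles=4, workers=4):
    """Split the image into horizontal strips, segment them concurrently,
    and stitch the per-strip results back together."""
    strips = np.array_split(image, tiles, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(segment_tile, strips))
    return np.concatenate(results, axis=0)
```

    In the real method, each strip would also carry a halo of neighboring pixels, and workers exchange boundary information so that regions spanning several sub-images converge to a single consistent segmentation.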

  15. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  16. Bounded Error Schemes for the Wave Equation on Complex Domains

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi; Yefet, Amir

    1998-01-01

    This paper considers the application of the method of boundary penalty terms ("SAT") to the numerical solution of the wave equation on complex shapes with Dirichlet boundary conditions. A theory is developed, in a semi-discrete setting, that allows the use of a Cartesian grid on complex geometries, yet maintains the order of accuracy with only a linear temporal error-bound. A numerical example, involving the solution of Maxwell's equations inside a 2-D circular wave-guide, demonstrates the efficacy of this method in comparison to others (e.g., the staggered Yee scheme): we achieve a decrease of two orders of magnitude in the level of the L2-error.

  17. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  18. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper attempts to explore the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%. In the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and outperforms an existing method.
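    The wavelet step can be sketched with a single-level Haar transform (an assumption for illustration; the paper does not necessarily use the Haar basis), using detail-band energy as a crude proxy for locating a reference point:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform (even-sized input).

    Returns the approximation band and the three detail bands that
    carry the high-frequency (ridge) information.
    """
    a = (img[0::2] + img[1::2]) / 2.0   # rows: low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # rows: high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def reference_point(img):
    # Crude proxy: locate the block with the largest detail energy.
    _, LH, HL, HH = haar_dwt2(img)
    energy = LH ** 2 + HL ** 2 + HH ** 2
    r, c = np.unravel_index(np.argmax(energy), energy.shape)
    return 2 * r, 2 * c                 # back to image coordinates
```

    A real core-point detector would analyze ridge orientation in the detail bands rather than raw energy, but the sketch shows why the wavelet domain is a natural place to look: ridge structure concentrates in LH/HL/HH.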

  19. Discrete is it enough? The revival of Piola-Hencky keynotes to analyze three-dimensional Elastica

    NASA Astrophysics Data System (ADS)

    Turco, Emilio

    2018-04-01

    Complex problems such as those concerning the mechanics of materials can be confronted only by considering numerical simulations. Analytical methods are useful to build guidelines or reference solutions but, for general cases of technical interest, the problems have to be solved numerically, especially in the case of large displacements and deformations. Continuous models probably arose to produce inspiring examples and stemmed from homogenization techniques. These techniques allowed for the solution of some paradigmatic examples but, in general, always require a discretization method for solving the problems dictated by applications. Therefore, and also taking into account that computing power is nowadays cheap and widely available, the question arises: why not use a discrete model for 3D beams directly? In other words, it could be interesting to formulate a discrete model without using an intermediate continuum one, as the latter has to be discretized in the end anyway. These simple considerations immediately evoke some very basic models developed many years ago, when computing power was practically nonexistent but the problem of finding simple solutions to beam deformation problems was already emerging. Indeed, in recent years, the keynotes of Hencky and Piola have attracted renewed attention [see, one for all, the work (Turco et al. in Zeitschrift für Angewandte Mathematik und Physik 67(4):1-28, 2016)]: generalizing their results, in the present paper a novel, directly discrete three-dimensional beam model is presented and discussed in the framework of geometrically nonlinear analysis. Using a stepwise algorithm based essentially on Newton's method to compute the extrapolations and on Riks' arc-length method to perform the corrections, we obtain numerical simulations showing the computational effectiveness of the presented model: indeed, it offers a convenient balance between accuracy and computational cost.

  20. Determination of Diffusion Profiles in Altered Wellbore Cement Using X-ray Computed Tomography Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, Harris E.; Walsh, Stuart D. C.; DuFrane, Wyatt L.

    2014-06-17

    The development of accurate, predictive models for use in determining wellbore integrity requires detailed information about the chemical and mechanical changes occurring in hardened Portland cements. X-ray computed tomography (XRCT) provides a method that can nondestructively probe these changes in three dimensions. Here, we describe a method for extracting subvoxel mineralogical and chemical information from synchrotron XRCT images by combining advanced image segmentation with geochemical models of cement alteration. The method relies on determining “effective linear activity coefficients” (ELAC) for the white light source to generate calibration curves that relate the image grayscales to material composition. The resulting data set supports the modeling of cement alteration by CO 2-rich brine with discrete increases in calcium concentration at reaction boundaries. The results of these XRCT analyses can be used to further improve coupled geochemical and mechanical models of cement alteration in the wellbore environment.
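    The calibration-curve idea (grayscale to composition) can be sketched with a linear fit. The numbers below are illustrative placeholders, not values from the paper, and a real ELAC calibration is specific to the beam spectrum:

```python
import numpy as np

# Calibration standards: grayscale measured for cement phases of known
# calcium mass fraction (hypothetical values, for illustration only).
ca_fraction = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
grayscale = np.array([102.0, 131.0, 159.0, 192.0, 220.0])

# Least-squares line: grayscale = m * fraction + c.
m, c = np.polyfit(ca_fraction, grayscale, 1)

def composition_from_grayscale(g):
    """Invert the calibration curve to estimate the calcium fraction
    of a voxel from its measured grayscale value."""
    return (g - c) / m
```

    Once such a curve is established from standards, every segmented voxel's grayscale can be mapped to an estimated composition, which is what lets the method resolve the discrete calcium jumps at reaction boundaries.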

  1. Time-domain damping models in structural acoustics using digital filtering

    NASA Astrophysics Data System (ADS)

    Parret-Fréaud, Augustin; Cotté, Benjamin; Chaigne, Antoine

    2016-02-01

    This paper describes a new approach to formulating well-posed time-domain damping models able to represent various frequency-domain profiles of damping properties. The novelty of this approach is to represent the behavior law of a given material directly in a discrete-time framework as a digital filter, which is synthesized for each material from a discrete set of frequency-domain data, such as the complex modulus, through an optimization process. A key point is the addition of specific constraints to this process in order to guarantee stability, causality and verification of the second law of thermodynamics when transposing the resulting discrete-time behavior law into the time domain. Thus, this method offers a framework which is particularly suitable for time-domain simulations in structural dynamics and acoustics for a wide range of materials (polymers, wood, foam, etc.), making it possible to control and even reduce the distortion effects induced by time-discretization schemes on the frequency response of continuous-time behavior laws.
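    The stability constraint central to this approach can be checked directly from the filter coefficients: a discrete-time behavior law is stable iff all poles of its transfer function lie inside the unit circle. A minimal sketch (the filter synthesis and optimization themselves are not reproduced):

```python
import numpy as np

def is_stable(a):
    """Stability test for an IIR filter with denominator coefficients a
    (a[0] = 1): all poles must lie strictly inside the unit circle."""
    poles = np.roots(a)
    return bool(np.all(np.abs(poles) < 1.0))

def frequency_response(b, a, w):
    """Complex response of the filter b/a at normalized frequencies w
    (radians per sample); this is what gets matched to the measured
    complex modulus during synthesis."""
    z = np.exp(1j * w)
    num = sum(bk * z ** (-k) for k, bk in enumerate(b))
    den = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return num / den
```

    In the paper's setting, such checks become constraints on the optimization, so that the fitted filter remains stable and causal once transposed to the time domain.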

  2. Radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper we analyze the accuracy and efficiency of several radiative transfer models for inferring cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR). The radiative transfer models are the exact discrete ordinate and matrix operator methods with matrix exponential, and the approximate asymptotic and equivalent Lambertian cloud models. To deal with the computationally expensive radiative transfer calculations, several acceleration techniques are used, such as the telescoping technique, the method of false discrete ordinate, the correlated k-distribution method and principal component analysis (PCA). We found that, for the EPIC oxygen A-band absorption channel at 764 nm, the exact models using the correlated k-distribution in conjunction with PCA yield an accuracy better than 1.5% and a computation time of 18 s for radiance calculations at 5 viewing zenith angles.

  3. A fictitious domain method for fluid/solid coupling applied to the lithosphere/asthenosphere interaction.

    NASA Astrophysics Data System (ADS)

    Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel

    2014-05-01

    A large variety of geodynamical problems can be viewed as solid/fluid interaction problems coupling two bodies with different physics. In particular, the lithosphere/asthenosphere mechanical interaction in subduction zones belongs to this kind of problem, where the solid lithosphere is embedded in the asthenospheric viscous fluid. In many fields (industry, civil engineering, etc.), in which deformations of the solid and fluid are "small", numerical modelers consider the exact discretization of both domains and fit the shape of the interface between the two domains as well as possible, solving the discretized physical problems by the Finite Element Method (FEM). However, in the context of subduction, the lithosphere undergoes large deformations and can evolve into a complex geometry, leading to important deformation of the surrounding asthenosphere. To avoid the precise meshing of complex geometries, numerical modelers have developed non-matching interface methods called Fictitious Domain Methods (FDM). The main idea of these methods is to extend the initial problem to a bigger (and simpler) domain. In our version of FDM, we determine the forces at the immersed solid boundary required to minimize (in the least-squares sense) the difference between fluid and solid velocities at this interface. This method is first-order accurate, and its stability depends on the ratio between the fluid background mesh size and the interface discretization. We present the formulation and provide benchmarks and examples showing the potential of the method: 1) a comparison with an analytical solution of a viscous flow around a rigid body; 2) an experiment of a rigid sphere sinking in a viscous fluid (in the two- and three-dimensional cases); 3) a comparison with an analog subduction experiment.
Another presentation describes the geodynamical application of this method to Andean subduction dynamics, studying cyclic slab folding on the 660 km discontinuity and its relationship with flat subduction.
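    The least-squares coupling step described above can be sketched in a toy setting. Assuming a linear discrete response of the interface fluid velocity to the boundary forces, the coupling forces follow from an ordinary least-squares solve; the operator `M` and the velocity data below are made-up stand-ins, not the paper's discretization.

```python
import numpy as np

# Toy sketch of the least-squares coupling idea in a fictitious domain method:
# the fluid velocity at the immersed interface is assumed to respond linearly
# to the boundary forces, u_interface = u0 + M @ f, and f is chosen so that the
# fluid matches the solid velocity at the interface in the least-squares sense.
# M, u0 and u_solid are illustrative stand-ins, not the paper's operators.
rng = np.random.default_rng(0)
n_interface, n_forces = 12, 8                  # interface sample points vs. force dofs
M = rng.normal(size=(n_interface, n_forces))   # discrete fluid response operator
u0 = rng.normal(size=n_interface)              # fluid velocity without coupling forces
u_solid = rng.normal(size=n_interface)         # prescribed solid velocity at interface

# Least-squares solve: minimize || (u0 + M f) - u_solid ||_2
f, *_ = np.linalg.lstsq(M, u_solid - u0, rcond=None)
residual = np.linalg.norm(u0 + M @ f - u_solid)
print(f"coupling force norm = {np.linalg.norm(f):.3f}, residual = {residual:.3e}")
```

    With more interface sample points than force degrees of freedom, the velocity mismatch cannot in general be driven to zero, only minimized, which is the sense in which the method is approximate at the interface.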

  4. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N2 different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations and, as a consequence, can be implemented very easily in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. ZT coding compresses the already compressed image even further by exploiting parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. It follows the links between zeros along the hierarchical structure of the pyramid to avoid transmitting branches that consist entirely of zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
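    The core zerotree test, whether a coefficient and all of its descendants across the pyramid levels are zero, can be sketched as follows. The toy three-level pyramid is illustrative, not HSP output.

```python
# Minimal sketch of the zerotree idea: a coefficient in a coarse pyramid level
# is a "zerotree root" when it is zero and all of its descendants in the finer
# levels are zero too, so the whole branch can be skipped in the bit stream.
import numpy as np

def is_zerotree_root(pyramid, level, i, j):
    """True if coefficient (i, j) at `level` and all its descendants are zero."""
    if pyramid[level][i, j] != 0:
        return False
    if level + 1 == len(pyramid):
        return True
    # Each coefficient has a 2x2 block of children at the next finer level.
    return all(is_zerotree_root(pyramid, level + 1, 2 * i + p, 2 * j + q)
               for p in (0, 1) for q in (0, 1))

coarse = np.array([[0, 3], [0, 0]])
mid = np.zeros((4, 4), dtype=int); mid[0, 2] = 1   # descendant of (0,1), not (0,0)
fine = np.zeros((8, 8), dtype=int)
pyramid = [coarse, mid, fine]
print(is_zerotree_root(pyramid, 0, 0, 0))   # True: the whole branch is zero
print(is_zerotree_root(pyramid, 0, 0, 1))   # False: the root coefficient is 3
```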

  5. Parallel Cartesian grid refinement for 3D complex flow simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2013-11-01

    A second-order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized using a novel second-order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equation is also discretized with second-order accuracy, and the high-performance Newton-Krylov method is used to integrate it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method ensures the stability of the overall algorithm and enables accurate multi-resolution simulations of real-life problems. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
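    The validation strategy mentioned above (a Poisson solve checked against an analytical solution, with grid refinement reducing the error) can be illustrated in a minimal 1D sketch with a Krylov solver. The paper's solver is 3D and unstructured; this shows only the flavor of the test.

```python
# Hedged sketch: discretize a 1D Poisson problem with Dirichlet data, solve it
# with a Krylov (conjugate-gradient) solver, and check that grid refinement
# reduces the maximum error against an analytical solution. For a second-order
# scheme, halving h should reduce the error by a factor of about 4.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def poisson_error(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                              # interior nodes
    A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2   # -u'' operator
    f = np.pi**2 * np.sin(np.pi * x)                          # so u_exact = sin(pi x)
    u, info = cg(A, f)
    assert info == 0                                          # Krylov solver converged
    return np.max(np.abs(u - np.sin(np.pi * x)))

e_coarse, e_fine = poisson_error(32), poisson_error(64)
print(f"error ratio under refinement: {e_coarse / e_fine:.2f}")  # ~4 expected
```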

  6. A New Ghost Cell/Level Set Method for Moving Boundary Problems: Application to Tumor Growth

    PubMed Central

    Macklin, Paul

    2011-01-01

    In this paper, we present a ghost cell/level set method for the evolution of interfaces whose normal velocities depend upon the solutions of linear and nonlinear quasi-steady reaction-diffusion equations with curvature-dependent boundary conditions. Our technique includes a ghost cell method that accurately discretizes normal derivative jump boundary conditions without smearing jumps in the tangential derivative; a new iterative method for solving linear and nonlinear quasi-steady reaction-diffusion equations; an adaptive discretization to compute the curvature and normal vectors; and a new discrete approximation to the Heaviside function. We present numerical examples that demonstrate better than 1.5-order convergence for problems where traditional ghost cell methods either fail to converge or attain at best sub-linear accuracy. We apply our techniques to a model of tumor growth in complex, heterogeneous tissues that consists of a nonlinear nutrient equation and a pressure equation with geometry-dependent jump boundary conditions. We simulate the growth of glioblastoma (an aggressive brain tumor) into a large, 1 cm square of brain tissue that includes heterogeneous nutrient delivery and varied biomechanical characteristics (white matter, gray matter, cerebrospinal fluid, and bone), and we observe growth morphologies that are highly dependent upon the variations of the tissue characteristics, an effect observed in real tumor growth. PMID:21331304
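    The paper's new discrete approximation to the Heaviside function is not reproduced here. As a point of reference, this sketch shows the classical smoothed Heaviside commonly used with level set functions, which transitions from 0 to 1 over a band of width 2*eps around the interface (phi = 0).

```python
# Classical C^1 smoothed Heaviside used in level set methods (a generic
# reference construction, not the paper's new discrete approximation).
import numpy as np

def smoothed_heaviside(phi, eps):
    """0 for phi < -eps, 1 for phi > eps, smooth monotone blend in between."""
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

phi = np.linspace(-2, 2, 9)          # signed distances to the interface
vals = smoothed_heaviside(phi, eps=1.5)
print(np.round(vals, 3))             # rises monotonically from 0 through 0.5 to 1
```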

  7. Stochastic dynamics of time correlation in complex systems with discrete time

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail

    2000-11-01

    In this paper we present a framework for describing random processes in complex systems with discrete time. It involves the description of the kinetics of discrete processes by means of a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We introduce the dynamic (time-dependent) information Shannon entropy Si(t), where i=0,1,2,3,..., as an information measure of the stochastic dynamics of time correlation (i=0) and time memory (i=1,2,3,...). The set of functions Si(t) constitutes a quantitative measure of time-correlation disorder (i=0) and time-memory disorder (i=1,2,3,...) in a complex system. The theory is developed from a careful analysis of time correlation involving the dynamics of a set of vectors of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). We analyse the vector dynamics employing finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF leads to the construction of a set of projection operators by use of the scalar-product operation. Harnessing the infinite set of orthogonal dynamic random variables, obtained by a Gram-Schmidt orthogonalization procedure, leads to an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). Solving these equations yields recurrence relations between the TCFs and MFs of successive orders. This offers new opportunities for determining the power spectra of the entropy functions Si(t) for time correlation (i=0) and time memory (i=1,2,3,...). The results obtained offer considerable scope for the analysis of the stochastic dynamics of discrete random processes in complex systems.
Application of this technique to the analysis of the stochastic dynamics of RR intervals from human ECGs shows convincing evidence of non-Markovian phenomena associated with peculiarities in short- and long-range scaling. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.
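    The central object above, the normalized discrete-time correlation function a(t) = <dx(0) dx(t)> / <dx^2>, can be computed directly from a fluctuation series. The AR(1) series below is illustrative stand-in data (not ECG RR intervals), chosen because its TCF is known to decay as 0.8^t.

```python
# Sketch: discrete-time correlation function of a fluctuation series.
import numpy as np

def tcf(x, max_lag):
    dx = x - x.mean()                      # fluctuations about the mean
    var = np.dot(dx, dx) / len(dx)
    return np.array([np.dot(dx[:len(dx) - t], dx[t:]) / (len(dx) - t) / var
                     for t in range(max_lag + 1)])

rng = np.random.default_rng(1)
x = np.zeros(20000)
for k in range(1, len(x)):                 # AR(1) process: exact TCF is 0.8**t
    x[k] = 0.8 * x[k - 1] + rng.normal()

a = tcf(x, 5)
print(np.round(a, 2))                      # a(0) = 1, then roughly 0.8, 0.64, ...
```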

  8. Imaging agents for in vivo magnetic resonance and scintigraphic imaging

    DOEpatents

    Engelstad, Barry L.; Raymond, Kenneth N.; Huberty, John P.; White, David L.

    1991-01-01

    Methods are provided for in vivo magnetic resonance imaging and/or scintigraphic imaging of a subject using chelated transition metal and lanthanide metal complexes. Novel ligands for these complexes are provided.

  9. Area and power efficient DCT architecture for image compression

    NASA Astrophysics Data System (ADS)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones and requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity, and achieves a performance in image compression comparable to that of existing approximated DCTs. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using the UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
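    The paper's specific 0/1 matrix is not reproduced here. As an illustration of the same idea, a multiplier-free approximate DCT needing only additions and subtractions, this sketch uses the well-known "signed DCT": the elementwise sign of the 8 × 8 DCT-II matrix, whose entries are all +1 or -1.

```python
# Multiplier-free approximate DCT sketch using the classical signed DCT
# (an illustrative stand-in, not the matrix proposed in the paper).
import numpy as np

N = 8
n = np.arange(N)
C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))  # exact DCT-II (unscaled)
T = np.sign(C).astype(int)                                        # entries in {-1, +1}

x = np.array([8, 10, 12, 11, 9, 7, 6, 7], dtype=float)            # sample image row
y_approx = T @ x            # computable with additions/subtractions only
y_exact = C @ x

# Both transforms concentrate the energy in the low-index coefficients:
print(np.round(y_approx, 1))
print(np.round(y_exact, 1))
```

    The energy compaction is what matters for compression: most of the signal ends up in a few low-frequency coefficients, so the remaining ones can be coarsely quantized.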

  10. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth because of their good modelling performance in applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
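    As a hedged illustration of the kind of stability result analysed above, consider a discrete-time complex-valued network z(k+1) = f(W z(k) + u): when the weight matrix is small enough in norm the map is a contraction, so trajectories from different initial states converge exponentially to the same equilibrium. The weights, input, and activation below are made-up choices, not the paper's conditions.

```python
# Discrete-time complex-valued recurrent network sketch: exponential
# convergence of two trajectories under a contraction condition.
import numpy as np

def f(z):                                   # bounded complex activation, nonexpansive
    return z / (1.0 + np.abs(z))

rng = np.random.default_rng(2)
W = 0.05 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))  # small-norm weights
u = rng.normal(size=4) + 1j * rng.normal(size=4)                     # constant input

z1 = rng.normal(size=4) + 1j * rng.normal(size=4)   # two different initial states
z2 = rng.normal(size=4) + 1j * rng.normal(size=4)
for _ in range(60):
    z1, z2 = f(W @ z1 + u), f(W @ z2 + u)

print(f"distance between trajectories after 60 steps: {np.linalg.norm(z1 - z2):.2e}")
```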

  11. Discrete cloud structure on Neptune

    NASA Technical Reports Server (NTRS)

    Hammel, H. B.

    1989-01-01

    Recent CCD imaging data for the discrete cloud structure of Neptune show that while cloud features at CH4-band wavelengths are manifest in the southern hemisphere, none have been encountered in the northern hemisphere since 1986. A literature search has shown the reflected CH4-band light from the planet to have come from a single discrete feature at least twice in the last 10 years. Disk-integrated photometry derived from the imaging has demonstrated that a bright cloud feature was responsible for the observed 8900 A diurnal variation in 1986 and 1987.

  12. Chaotification of complex networks with impulsive control.

    PubMed

    Guan, Zhi-Hong; Liu, Feng; Li, Juan; Wang, Yan-Wu

    2012-06-01

    This paper investigates the chaotification problem of complex dynamical networks (CDN) with impulsive control. Both the discrete and continuous cases are studied. A method is presented to drive all states of every node in a CDN to chaos. The proposed impulsive control strategy is effective for both originally stable and originally unstable CDN. The upper bound of the impulse intervals for originally stable networks is derived. Finally, the effectiveness of the theoretical results is verified by numerical examples.

  13. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  14. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    PubMed

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
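    A minimal sketch of the Fourier-domain deconvolution idea: a spectrum modelled as narrow lines convolved with a known Gaussian broadening function is sharpened by exchanging the broad lineshape for a narrower one in the Fourier domain, with a crude apodization cutoff to keep noise-dominated high frequencies from blowing up. The line positions and widths are invented for illustration; this is the generic technique, not the chapter's exact implementation.

```python
import numpy as np

n = 512
x = np.arange(n)
spikes = np.zeros(n); spikes[200] = 1.0; spikes[220] = 0.8     # two overlapped transitions

def gaussian(width):
    g = np.exp(-0.5 * ((x - n // 2) / width) ** 2)
    return np.roll(g / g.sum(), -n // 2)                        # unit area, centred at 0

broad = np.real(np.fft.ifft(np.fft.fft(spikes) * np.fft.fft(gaussian(6.0))))

# Swap the sigma=6 lineshape for a sigma=3 one in the Fourier domain, zeroing
# frequencies where the broadening function has decayed toward the numerical
# noise floor -- a crude apodization filter.
Gb, Gt = np.fft.fft(gaussian(6.0)), np.fft.fft(gaussian(3.0))
keep = np.abs(Gb) > 1e-6
ratio = np.where(keep, Gt / np.where(keep, Gb, 1.0), 0.0)
sharp = np.real(np.fft.ifft(np.fft.fft(broad) * ratio))

def valley_depth(y):            # 1 = fully resolved doublet, 0 = unresolved
    return 1.0 - y[200:221].min() / y[[200, 220]].max()

print(f"valley depth before: {valley_depth(broad):.2f}, after: {valley_depth(sharp):.2f}")
```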

  15. Imaging agents for in vivo magnetic resonance and scintigraphic imaging

    DOEpatents

    Engelstad, B.L.; Raymond, K.N.; Huberty, J.P.; White, D.L.

    1991-04-23

    Methods are provided for in vivo magnetic resonance imaging and/or scintigraphic imaging of a subject using chelated transition metal and lanthanide metal complexes. Novel ligands for these complexes are provided.

  16. Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.

    PubMed

    Cheng, Ching-Hsue; Liu, Wei-Xiang

    2018-05-28

    Population aging has become a worldwide phenomenon that causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic resonance imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation methods are less laborious and time-consuming, they are restricted to several specific types of images; in addition, hybrid segmentation techniques can overcome the shortcomings of single segmentation methods. Therefore, this study proposed a hybrid segmentation combined with a rough set classifier and the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image-processing method that enhances the accuracy of brain disease classification. In the first stage, the proposed hybrid segmentation algorithms are used to segment the brain ROI (region of interest). In the second stage, the wavelet packet is used to decompose the image and calculate the feature values. In the final stage, the rough set classifier is utilized to identify the degenerative brain disease. For verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and to compare it with the TV-seg (total variation segmentation) algorithm, the discrete cosine transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.
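    The feature-extraction stage described above can be sketched with a 2-level Haar wavelet packet decomposition of a 1D signal, using the energy of each leaf subband as a feature vector. A real ROI would be 2D and the classifier a rough-set model; both are beyond this illustration.

```python
# Haar wavelet packet energy features (1D illustrative sketch).
import numpy as np

def haar_split(s):
    a = (s[0::2] + s[1::2]) / np.sqrt(2)     # approximation (low-pass)
    d = (s[0::2] - s[1::2]) / np.sqrt(2)     # detail (high-pass)
    return a, d

def wavelet_packet_energies(s, levels=2):
    nodes = [s]
    for _ in range(levels):                   # split every node, unlike a plain DWT
        nodes = [half for node in nodes for half in haar_split(node)]
    return np.array([np.sum(node ** 2) for node in nodes])

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t)            # smooth, low-frequency test signal
feats = wavelet_packet_energies(signal)
print(np.round(feats / feats.sum(), 3))       # energy concentrated in the first subband
```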

  17. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method that particularly aims to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and the discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965

  18. An Imaging Model Incorporating Ultrasonic Transducer Properties for Three-Dimensional Optoacoustic Tomography

    PubMed Central

    Wang, Kun; Ermilov, Sergey A.; Su, Richard; Brecht, Hans-Peter; Oraevsky, Alexander A.; Anastasio, Mark A.

    2010-01-01

    Optoacoustic Tomography (OAT) is a hybrid imaging modality that combines the advantages of optical and ultrasound imaging. Most existing reconstruction algorithms for OAT assume that the ultrasound transducers employed to record the measurement data are point-like. When transducers with large detecting areas and/or compact measurement geometries are utilized, this assumption can result in conspicuous image blurring and distortions in the reconstructed images. In this work, a new OAT imaging model that incorporates the spatial and temporal responses of an ultrasound transducer is introduced. A discrete form of the imaging model is implemented and its numerical properties are investigated. We demonstrate that use of the imaging model in an iterative reconstruction method can improve the spatial resolution of the optoacoustic images as compared to those reconstructed assuming point-like ultrasound transducers. PMID:20813634
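    The modelling idea above can be sketched in 1D: the signal recorded by a real transducer is the ideal point-detector optoacoustic pressure convolved in time with the transducer's acousto-electric impulse response and averaged over the finite detecting aperture. The N-shaped pressure pulse, the damped-sinusoid impulse response, and the aperture delays below are generic illustrations, not the paper's measured responses.

```python
import numpy as np

fs = 40e6                                   # sampling rate, 40 MHz
t = np.arange(400) / fs
p_ideal = np.zeros_like(t)                  # idealized "N-shaped" pulse from a small absorber
p_ideal[100:120] = np.linspace(0, 1, 20)
p_ideal[120:140] = np.linspace(-1, 0, 20)

# Band-limited acousto-electric impulse response: damped sinusoid at 5 MHz.
h = np.exp(-t[:80] * 2e6) * np.sin(2 * np.pi * 5e6 * t[:80])
h /= np.abs(h).sum()

# Spatial averaging over the aperture ~ summing time-shifted copies (different
# absorber-to-element distances across the detecting area).
shifts = [0, 2, 4, 6]                       # sample delays across the aperture
p_aperture = sum(np.roll(p_ideal, s) for s in shifts) / len(shifts)

p_measured = np.convolve(p_aperture, h)[:len(t)]
print(f"max step in ideal signal: {np.abs(np.diff(p_ideal)).max():.2f}, "
      f"in measured signal: {np.abs(np.diff(p_measured)).max():.2f}")
```

    The sharp features of the ideal pulse are smoothed out, which is exactly the blurring that motivates building the transducer response into the reconstruction model.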

  19. Fabrication and application of heterogeneous printed mouse phantoms for whole animal optical imaging

    PubMed Central

    Bentz, Brian Z.; Chavan, Anmol V.; Lin, Dergan; Tsai, Esther H. R.; Webb, Kevin J.

    2017-01-01

    This work demonstrates the usefulness of 3D printing for optical imaging applications. Progress in developing optical imaging for biomedical applications requires customizable and often complex objects for testing and evaluation. There is therefore high demand for what have become known as tissue-simulating “phantoms.” We present a new optical phantom fabricated using inexpensive 3D printing methods with multiple materials, allowing for the placement of complex inhomogeneities in complex or anatomically realistic geometries, as opposed to previous phantoms, which were limited to simple shapes formed by molds or machining. We use diffuse optical imaging to reconstruct optical parameters in 3D space within a printed mouse to show the applicability of the phantoms for developing whole animal optical imaging methods. This phantom fabrication approach is versatile, can be applied to optical imaging methods besides diffusive imaging, and can be used in the calibration of live animal imaging data. PMID:26835763

  20. Search by photo methodology for signature properties assessment by human observers

    NASA Astrophysics Data System (ADS)

    Selj, Gorm K.; Heinrich, Daniela H.

    2015-05-01

    Reliable, low-cost, and simple methods for assessing signature properties for military purposes are very important. In this paper we present such an approach, which uses human observers in a search-by-photo assessment of the signature properties of generic test targets. The method logs a large number of detection times for targets recorded against relevant terrain backgrounds. The detection times were collected from human observers searching for targets in scene images shown on a high-definition PC screen. All targets were identically located in each "search image", allowing relative comparisons (and not just rank ordering) of targets. To avoid biased detections, each observer searched for only one target per scene. Statistical analyses were carried out on the detection-time data: analysis of variance was chosen if the detection-time distributions associated with all targets satisfied normality, and non-parametric tests, such as Wilcoxon's rank test, otherwise. The new methodology allows assessment of signature properties in a reproducible, rapid, and reliable setting. Such assessments are very complex, as they must sort out what is of relevance in a signature test without losing information of value. We believe that choosing detection time as the primary variable for comparing signature properties allows a careful and necessary inspection of observer data, as the variable is continuous rather than discrete. Our method thus stands in opposition to approaches based on detections during successive stepwise reductions in the distance to the target, or based on probability of detection.
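    The statistical decision rule described above (ANOVA when the detection-time distributions pass a normality test, a non-parametric rank test otherwise) can be sketched as follows. The detection-time samples are synthetic, and the specific tests used (Shapiro-Wilk, Wilcoxon rank-sum) are reasonable stand-ins rather than the authors' exact choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
times_a = rng.lognormal(mean=2.0, sigma=0.6, size=40)   # target A detection times (s)
times_b = rng.lognormal(mean=2.4, sigma=0.6, size=40)   # target B: harder to detect

# Parametric test only if every target's detection times look normal.
normal = all(stats.shapiro(t)[1] > 0.05 for t in (times_a, times_b))
if normal:
    stat, p = stats.f_oneway(times_a, times_b)          # analysis of variance
    chosen = "ANOVA"
else:
    stat, p = stats.ranksums(times_a, times_b)          # Wilcoxon rank-sum fallback
    chosen = "rank-sum"

print(f"{chosen}: p = {p:.4f}")
```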

  1. Comparison of Methods for Determining Boundary Layer Edge Conditions for Transition Correlations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Berry, Scott A.; Hollis, Brian R.; Horvath, Thomas J.

    2003-01-01

    Data previously obtained for the X-33 in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed to compare methods for determining boundary layer edge conditions for use in transition correlations. The experimental results were previously obtained utilizing the phosphor thermography technique to monitor the status of the boundary layer downstream of discrete roughness elements via global heat transfer images of the X-33 windward surface. A boundary layer transition correlation was previously developed for this data set using boundary layer edge conditions calculated with an inviscid/integral boundary layer approach. In the present study, an algorithm was written to extract boundary layer edge quantities from higher-fidelity viscous computational fluid dynamics solutions in order to develop transition correlations that account for viscous effects on vehicles of arbitrary complexity. The boundary layer transition correlation developed for the X-33 from the viscous solutions is compared to the previous boundary layer transition correlations. It is shown that the boundary layer edge conditions calculated using an inviscid/integral boundary layer approach are significantly different from those extracted from viscous computational fluid dynamics solutions. The present results demonstrate the differences obtained in correlating transition data using different computational methods.

  2. Unstructured Cartesian refinement with sharp interface immersed boundary method for 3D unsteady incompressible flows

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis

    2016-11-01

    A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimension Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with the matrix free Newton-Krylov method and the Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. 
Finally, a geometry resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates the potential of the method to simulate turbulent flows past geometrically complex bodies on locally refined meshes. In all the cases, the results are found to be in very good agreement with published data and savings in computational resources are achieved.

  3. Multigrid finite element method in stress analysis of three-dimensional elastic bodies of heterogeneous structure

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2016-11-01

    To calculate three-dimensional elastic bodies of heterogeneous structure under static loading, a multigrid finite element method is provided, implemented on the basis of finite element method (FEM) algorithms using homogeneous and composite three-dimensional multigrid finite elements (MFE). The feature distinguishing MFE from currently available finite elements (FE) is that, in developing composite MFE (without increasing their dimension), arbitrarily small basic partitions of composite solids consisting of single-grid homogeneous first-order FE can be used, i.e., in effect, a micro approach in finite element form. These small partitions allow the MFE, i.e., the basic discrete models of composite solids, to take into account complex heterogeneous and microscopically inhomogeneous structure and shape, the complex nature of the loading and fixation, and to describe the stress and strain state arbitrarily closely by the equations of three-dimensional elasticity theory without any additional simplifying hypotheses. In building an m-grid FE, m nested grids are used. The fine grid is generated by the basic partition of the MFE; the other m-1 coarser grids are applied to reduce the MFE dimensionality, which becomes smaller as m is increased. Procedures for developing MFE of rectangular parallelepiped, irregular-shape, plate and beam types are given. MFE generate discrete models of small dimension and numerical solutions of high accuracy. An example of calculating a laminated plate using three-dimensional 3-grid FE is given, together with the reference discrete model, which has 2.2 billion FEM nodal unknowns.

  4. Discrete frequency infrared microspectroscopy and imaging with a tunable quantum cascade laser

    PubMed Central

    Kole, Matthew R.; Reddy, Rohith K.; Schulmerich, Matthew V.; Gelber, Matthew K.; Bhargava, Rohit

    2012-01-01

    Fourier-transform infrared imaging (FT-IR) is a well-established modality but requires the acquisition of a spectrum over a large bandwidth, even in cases where only a few spectral features may be of interest. Discrete frequency infrared (DF-IR) methods are now emerging in which a small number of measurements may provide all the analytical information needed. The DF-IR approach is enabled by the development of new sources integrating frequency selection, in particular of tunable, narrow-bandwidth sources with enough power at each wavelength to successfully make absorption measurements. Here, we describe a DF-IR imaging microscope that uses an external cavity quantum cascade laser (QCL) as a source. We present two configurations, one with an uncooled bolometer as a detector and another with a liquid nitrogen cooled Mercury Cadmium Telluride (MCT) detector and compare their performance to a commercial FT-IR imaging instrument. We examine the consequences of the coherent properties of the beam with respect to imaging and compare these observations to simulations. Additionally, we demonstrate that the use of a tunable laser source represents a distinct advantage over broadband sources when using a small aperture (narrower than the wavelength of light) to perform high-quality point mapping. The two advances highlight the potential application areas for these emerging sources in IR microscopy and imaging. PMID:23113653

  5. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  6. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  7. Discrete Roughness Effects on Shuttle Orbiter at Mach 6

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Hamilton, H. Harris, II

    2002-01-01

    Discrete roughness boundary layer transition results on a Shuttle Orbiter model in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed with new boundary layer calculations to provide consistency for comparison to other published results. The experimental results were previously obtained utilizing the phosphor thermography system to monitor the status of the boundary layer via global heat transfer images of the Orbiter windward surface. The size and location of discrete roughness elements were systematically varied along the centerline of the 0.0075-scale model at an angle of attack of 40 deg and the boundary layer response recorded. Various correlative approaches were attempted, with the roughness transition correlations based on edge properties providing the most reliable results. When a consistent computational method is used to compute edge conditions, transition datasets for different configurations at several angles of attack have been shown to collapse to a well-behaved correlation.

  8. DART: a practical reconstruction algorithm for discrete tomography.

    PubMed

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
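
    The segmentation idea at the core of DART can be illustrated with a hedged toy sketch: an algebraic reconstruction step (a plain Landweber iteration here, standing in for SIRT-type updates) followed by snapping each pixel to the nearest known gray value. The real algorithm alternates these steps and restricts updates to boundary pixels; the sizes and values below are illustrative.

```python
import numpy as np

# Toy 1-D discrete tomography: recover a binary object from noiseless
# linear projections, then exploit the known gray values {0, 1}.
rng = np.random.default_rng(0)
grays = np.array([0.0, 1.0])                   # known compositions
x_true = (rng.random(30) > 0.5).astype(float)  # binary "image"
A = rng.standard_normal((60, 30))              # toy projection geometry
b = A @ x_true                                 # noiseless projection data

x = np.zeros(30)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):                           # algebraic reconstruction
    x = x + step * A.T @ (b - A @ x)

# DART-style segmentation: snap each pixel to the nearest known gray value
x_seg = grays[np.argmin(np.abs(x[:, None] - grays), axis=1)]
```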

  9. Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.

    PubMed

    Ji, L; Danuser, G

    2005-12-01

    We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
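
    The benefit of integrating correlation scores over multiple frame pairs can be sketched in one dimension (a hedged illustration, not the authors' implementation): a template block is matched against many noisy frames that share the same quasi-stationary shift, and the summed score identifies the shift where a single noisy pair could not. Sizes, noise level, and shift are illustrative.

```python
import numpy as np

# Multi-frame correlation sketch: sum correlation scores over many noisy
# "frame pairs" before picking the best candidate shift.
rng = np.random.default_rng(1)
true_shift = 3
signal = rng.random(64)                 # 1-D stand-in for an image row
template = signal[10:40]                # block to be tracked

scores = np.zeros(7)                    # candidate shifts 0..6
for _ in range(30):                     # 30 noisy observations
    frame = np.roll(signal, true_shift) + 0.5 * rng.standard_normal(64)
    for s in range(7):
        block = frame[10 + s:40 + s]
        # centered (un-normalized) correlation score for this shift
        scores[s] += np.dot(template - template.mean(),
                            block - block.mean())

best = int(np.argmax(scores))           # shift with largest summed score
```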

  10. Towards a Geography of Emotional Analysis

    ERIC Educational Resources Information Center

    Otrel-Cass, Kathrin

    2016-01-01

    This article is a forum response to a research article on self-reporting methods when studying discrete emotions in science education environments. Studying emotions in natural settings is a difficult task because of the complexity of deciphering verbal and non-verbal communication. In my response I present three main points that build on insights…

  11. Linear prediction data extrapolation superresolution radar imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing

    1993-05-01

    Range resolution and cross-range resolution of range-Doppler imaging radars are related to the effective bandwidth of the transmitted signal and the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-Doppler processing and improving the resolution capability of range-Doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating the observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. Live data of a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and of a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT may yield either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object, or equal-quality images from a smaller bandwidth and total angle.
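
    A hedged one-dimensional sketch of the LPDEDFT idea: fit linear-prediction coefficients to an observed window by least squares, extrapolate the data beyond the window with the fitted recurrence, and apply the DFT to the lengthened record. For a noiseless sum of sinusoids the extrapolation is essentially exact, which is what sharpens the spectral peaks; all lengths and frequencies below are illustrative.

```python
import numpy as np

# Linear-prediction data extrapolation followed by a conventional DFT.
n, p = 64, 8                        # observed length, prediction order
t = np.arange(n + 32)
x = np.sin(0.9 * t) + 0.5 * np.sin(1.3 * t)
obs, future = x[:n], x[n:]          # observed window / true continuation

# least-squares fit of x[k] ~ sum_j a[j] * x[k-1-j]
rows = np.array([obs[k - 1::-1][:p] for k in range(p, n)])
a, *_ = np.linalg.lstsq(rows, obs[p:n], rcond=None)

ext = list(obs)
for _ in range(32):                 # extrapolate beyond the window
    ext.append(np.dot(a, ext[-1:-p - 1:-1]))
ext = np.array(ext)

spectrum = np.abs(np.fft.fft(ext))  # DFT of the extended record
err = np.max(np.abs(ext[n:] - future))
```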

  12. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
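
    The double-random-phase building block underlying schemes of this family can be sketched as follows (a hedged illustration only: the proposed method additionally involves two-step phase-shifting interferometry, DWT sparsification, Arnold scrambling, and CS, none of which are reproduced here). The keys and image are randomly generated placeholders.

```python
import numpy as np

# Double random phase encoding: encrypt with two random phase masks,
# decrypt by applying the inverse operations with the correct keys.
rng = np.random.default_rng(3)
img = rng.random((8, 8))                      # toy plaintext image
p1 = np.exp(2j * np.pi * rng.random((8, 8)))  # input-plane phase key
p2 = np.exp(2j * np.pi * rng.random((8, 8)))  # Fourier-plane phase key

enc = np.fft.ifft2(np.fft.fft2(img * p1) * p2)   # complex ciphertext
dec = np.fft.ifft2(np.fft.fft2(enc) / p2) / p1   # exact with right keys
```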

  13. A first-order Green's function approach to supersonic oscillatory flow: A mixed analytic and numeric treatment

    NASA Technical Reports Server (NTRS)

    Freedman, M. I.; Sipcic, S.; Tseng, K.

    1985-01-01

    A frequency-domain Green's function method for unsteady supersonic potential flow around complex aircraft configurations is presented. The focus is on the supersonic range wherein the linear potential flow assumption is valid. In this range the effects of the nonlinear terms in the unsteady supersonic compressible velocity potential equation are negligible and therefore these terms will be omitted. The Green's function method is employed in order to convert the potential flow differential equation into an integral one. This integral equation is then discretized, through a standard finite-element technique, to yield a linear algebraic system of equations relating the unknown potential to its prescribed co-normalwash (boundary condition) on the surface of the aircraft. The arbitrary complex aircraft configuration (e.g., finite-thickness wing, wing-body-tail) is discretized into hyperboloidal (twisted quadrilateral) panels. The potential and co-normalwash are assumed to vary linearly within each panel. The long-range goal is to develop a comprehensive theory for unsteady supersonic potential aerodynamics that is capable of yielding accurate results even in the low supersonic (i.e., high transonic) range.

  14. The short time Fourier transform and local signals

    NASA Astrophysics Data System (ADS)

    Okumura, Shuhei

    In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window of the input series. We move the window by one time point at a time, so we have overlapping windows. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed forms. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise time series input results in the STFT output being a complex-valued stationary time series and we can derive the time and time-frequency dependency structure such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared modulus STFT time series has consecutive large values exceeding some threshold after one exceeding observation following one observation less than the threshold. We discuss a method to reduce the computation of such probabilities by the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
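
    The windowing described above can be sketched directly (a hedged toy example, not the thesis' detection procedure): a DFT applied to a fixed-size window moved one sample at a time, whose squared-modulus series takes large values in the bins where a local periodic signal is present. Window length, burst position, and noise level are illustrative.

```python
import numpy as np

def stft(x, w):
    # one row of DFT coefficients per window position (hop = 1 sample)
    return np.array([np.fft.fft(x[i:i + w]) for i in range(len(x) - w + 1)])

n, w = 256, 32
t = np.arange(n)
x = 0.1 * np.random.default_rng(2).standard_normal(n)
x[100:164] += np.sin(2 * np.pi * 4 * t[100:164] / w)   # local burst, bin 4

S = np.abs(stft(x, w)) ** 2       # squared-modulus time series per bin
inside = S[110, 4]                # window fully inside the burst
outside = S[10, 4]                # window over noise only
```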

  15. An upwind method for the solution of the 3D Euler and Navier-Stokes equations on adaptively refined meshes

    NASA Astrophysics Data System (ADS)

    Aftosmis, Michael J.

    1992-10-01

    A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.
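
    The discretization split described in the abstract (upwind differencing for the convective term, central differencing for the viscous term) can be sketched in one dimension. The sketch below uses first-order upwind and explicit Euler rather than the paper's second-order TVD scheme and multistage Runge-Kutta stepping; all parameters are illustrative.

```python
import numpy as np

# 1-D advection-diffusion: upwind convection + central-difference viscosity.
nx, nt = 100, 200
dx, dt = 1.0 / 100, 0.002
a, nu = 1.0, 0.005                   # advection speed, viscosity

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)    # initial pulse (periodic domain)

for _ in range(nt):
    conv = a * (u - np.roll(u, 1)) / dx                           # upwind, a > 0
    visc = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central
    u = u + dt * (visc - conv)       # explicit Euler step

peak = x[np.argmax(u)]               # pulse has advected from x = 0.3
```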

  16. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
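
    The wavelet/scalar quantization idea can be sketched on a single row of pixels (hedged: a one-level Haar transform stands in for the standard's filter bank and multi-subband decomposition, and the entropy-coding stage is omitted; the pixel values and step size are illustrative).

```python
import numpy as np

# One-level Haar transform + uniform scalar quantization of the detail band.
row = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)

avg = (row[0::2] + row[1::2]) / 2      # approximation subband
det = (row[0::2] - row[1::2]) / 2      # detail subband

step = 2.0                             # uniform quantizer step size
det_hat = np.round(det / step) * step  # quantize, then dequantize

rec = np.empty_like(row)               # inverse Haar reconstruction
rec[0::2] = avg + det_hat
rec[1::2] = avg - det_hat
err = np.max(np.abs(rec - row))        # bounded by step / 2
```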

  17. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  18. Integrating Reflection Seismic, Gravity and Magnetic Data to Reveal the Structure of Crystalline Basement: Implications for Understanding Rift Development

    NASA Astrophysics Data System (ADS)

    Lenhart, Antje; Jackson, Christopher A.-L.; Bell, Rebecca E.; Duffy, Oliver B.; Fossen, Haakon; Gawthorpe, Robert L.

    2016-04-01

    Numerous rifts form above crystalline basement containing pervasive faults and shear zones. However, the compositional and mechanical heterogeneity within crystalline basement and the geometry and kinematics of discrete and pervasive basement fabrics are poorly understood. Furthermore, the interpretation of intra-crustal structures beneath sedimentary basins is often complicated by limitations in the depth of conventional seismic imaging, the commonly acoustically transparent nature of basement, limited well penetrations, and complex overprinting of multiple tectonic events. Yet, a detailed knowledge of the structural and lithological complexity of crystalline basement rocks is crucial to improve our understanding of how rifts evolve. Potential field methods are a powerful but perhaps underutilised regional tool that can decrease interpretational uncertainty based solely on seismic reflection data. We use petrophysical data, high-resolution 3D reflection seismic volumes, gridded gravity and magnetic data, and 2D gravity and magnetic modelling to constrain the structure of crystalline basement offshore western Norway. Intra-basement structures are well-imaged on seismic data due to relatively shallow burial of the basement beneath a thin (<3.5 km) sedimentary cover. Variations in basement composition were interpreted from detailed seismic facies analysis and mapping of discrete intra-basement reflections. A variety of data filtering and isolation techniques were applied to the original gravity and magnetic data in order to enhance small-scale field variations, to accentuate formation boundaries and discrete linear trends, and to isolate shallow and deep crustal anomalies. In addition, 2D gravity and magnetic data modelling was used to verify the seismic interpretation and to further constrain the configuration of the upper and lower crust. 
Our analysis shows that the basement offshore western Norway is predominantly composed of Caledonian allochthonous nappes overlying large-scale anticlines of Proterozoic rocks of the Western Gneiss Region. Major Devonian extensional brittle faults, detachments and shear zones transect those tectono-stratigraphic units. Results from structural analysis of enhanced gravity and magnetic data indicate the presence of distinct intra-basement bodies and structural lineaments at different scales and depth levels which correlate with our seismic data interpretation and can be linked to their onshore counterparts exposed on mainland Norway. 2D forward models of gravity and magnetic data further support our interpretation and quantitatively constrain variations in magnetic and density properties of principal basement units. We conclude that: i) enhanced gravity and magnetic data are a powerful tool to constrain the geometry of individual intra-basement bodies and to detect structural lineaments not imaged in seismic data; ii) insights from this study can be used to evaluate the role of pre-existing basement structures on the evolution of rift basins; and iii) the integration of a range of geophysical datasets is crucial to improve our understanding of the deep subsurface.

  19. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  20. A review of techniques for visualising soft tissue microstructure deformation and quantifying strain Ex Vivo.

    PubMed

    Disney, C M; Lee, P D; Hoyland, J A; Sherratt, M J; Bay, B K

    2018-04-14

    Many biological tissues have a complex hierarchical structure allowing them to function under demanding physiological loading conditions. Structural changes caused by ageing or disease can lead to loss of mechanical function. Therefore, it is necessary to characterise tissue structure to understand normal tissue function and the progression of disease. Ideally intact native tissues should be imaged in 3D and under physiological loading conditions. The current published in situ imaging methodologies demonstrate a compromise between imaging limitations and maintaining the sample's native mechanical function. This review gives an overview of in situ imaging techniques used to visualise microstructural deformation of soft tissue, including three case studies of different tissues (tendon, intervertebral disc and artery). Some of the imaging techniques restricted analysis to observational mechanics or discrete strain measurement from invasive markers. Full-field local surface strain measurement has been achieved using digital image correlation. Volumetric strain fields have successfully been quantified from in situ X-ray microtomography (micro-CT) studies of bone using digital volume correlation but not in soft tissue due to low X-ray transmission contrast. With the latest developments in micro-CT showing in-line phase contrast capability to resolve native soft tissue microstructure, there is potential for future soft tissue mechanics research where 3D local strain can be quantified. These methods will provide information on the local 3D micromechanical environment experienced by cells in healthy, aged and diseased tissues. It is hoped that future applications of in situ imaging techniques will impact positively on the design and testing of potential tissue replacements or regenerative therapies. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  1. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    NASA Astrophysics Data System (ADS)

    Chen, Xudong

    2010-07-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
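
    The subspace splitting at the heart of the method can be illustrated with a hedged linear toy problem: for measurements y = G j, the component of the contrast source j lying in the dominant singular subspace of G is recovered deterministically from the spectrum, leaving only the orthogonal complement to optimization. The operator and dimensions below are synthetic placeholders, not a scattering model.

```python
import numpy as np

# Deterministic recovery of the dominant-subspace part of the source.
rng = np.random.default_rng(4)
U, _, Vt = np.linalg.svd(rng.standard_normal((40, 40)))
s = 2.0 ** -np.arange(40)           # decaying spectrum (ill-posed G)
G = (U * s) @ Vt                    # synthetic operator with known SVD

j = rng.standard_normal(40)         # true contrast source
y = G @ j                           # measurements

Ldim = 10                           # dominant-subspace dimension
j_dom = Vt[:Ldim].T @ ((U[:, :Ldim].T @ y) / s[:Ldim])

proj = Vt[:Ldim].T @ (Vt[:Ldim] @ j)   # projection of the true source
gap = np.max(np.abs(j_dom - proj))     # the two agree to round-off
```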

  2. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  3. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  4. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-10

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  5. A discrete mechanics approach to dislocation dynamics in BCC crystals

    NASA Astrophysics Data System (ADS)

    Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.

    2007-03-01

    A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers' vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem based on considerations of metastability is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation based crystal plasticity with fully atomistic resolution.

  6. Fast quantum nD Fourier and radon transforms

    NASA Astrophysics Data System (ADS)

    Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.

    2001-07-01

    Fast classical and quantum algorithms are introduced for a wide class of non-separable nD discrete unitary K-transforms (DKT). They require a number of 1D DKTs smaller than in the Cooley-Tukey radix-p FFT-type approach. The method utilizes a decomposition of the nD K-transform into a product of the nD discrete Radon transform and a family of parallel, independent 1D K-transforms. If the nD K-transform has a separable kernel, our approach again reduces the multiplicative complexity by a factor of n compared to the row/column separable Cooley-Tukey radix-p approach.

  7. A review of the application and contribution of discrete choice experiments to inform human resources policy interventions

    PubMed Central

    Lagarde, Mylene; Blaauw, Duane

    2009-01-01

    Although the factors influencing the shortage and maldistribution of health workers have been well-documented by cross-sectional surveys, there is less evidence on the relative determinants of health workers' job choices, or on the effects of policies designed to address these human resources problems. Recently, a few studies have adopted an innovative approach to studying the determinants of health workers' job preferences. In the absence of longitudinal datasets to analyse the decisions that health workers have actually made, authors have drawn on methods from marketing research and transport economics and used Discrete Choice Experiments to analyse stated preferences of health care providers for different job characteristics. We carried out a literature review of studies using discrete choice experiments to investigate human resources issues related to health workers, both in developed and developing countries. Several economic and health systems bibliographic databases were used, and contacts were made with practitioners in the field to identify published and grey literature. Ten studies were found that used discrete choice experiments to investigate the job preferences of health care providers. The use of discrete choice experiments techniques enabled researchers to determine the relative importance of different factors influencing health workers' choices. The studies showed that non-pecuniary incentives are significant determinants, sometimes more powerful than financial ones. The identified studies also emphasized the importance of investigating the preferences of different subgroups of health workers. Discrete choice experiments are a valuable tool for informing decision-makers on how to design strategies to address human resources problems. As they are relatively quick and cheap survey instruments, discrete choice experiments present various advantages for informing policies in developing countries, where longitudinal labour market data are seldom available. Yet they are complex research instruments requiring expertise in a number of different areas. Therefore it is essential that researchers also understand the potential limitations of discrete choice experiment methods. PMID:19630965

  8. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform

    PubMed Central

    Rezaee, Kh.; Haddadnia, J.

    2013-01-01

    Background: Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and this has always posed a major challenge to radiologists and physicians. Objective: This paper proposes a new algorithm which draws on discrete wavelet transform and adaptive K-means techniques to transform the medical images, estimate the tumor, and detect breast cancer tumors in mammograms at early stages. It also allows rapid processing of the input data. Method: In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximation coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, the appropriate threshold is selected by using the adaptive K-means algorithm with smart initialization and choice of the number of clusters. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. Results: We received 120 mammographic images in LJPEG format, scanned in gray scale at 50 micron resolution with 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level, with an average accuracy of 92.32% and sensitivity of 90.24%. The Kappa coefficient was approximately 0.85, indicating reliable system performance. Conclusion: The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a warranty of its performance, and both clinical specialists and patients can trust its output. PMID:25505753
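
    The wavelet-approximation plus K-means pipeline described above can be sketched on synthetic data: a one-level Haar approximation band followed by a two-cluster Lloyd's K-means on intensities. The array sizes, thresholding rule, and toy image below are assumptions for illustration, not the paper's clinical setup.

```python
import numpy as np

def haar_approx(img):
    """One level of the 2D Haar DWT approximation band (2x2 block averages)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def kmeans_1d(values, k=2, iters=20):
    """Minimal Lloyd's k-means on intensities; returns sorted cluster centers."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

# Synthetic "mammogram": dark background with one bright mass.
img = np.full((64, 64), 0.2)
img[20:36, 24:40] = 0.9
approx = haar_approx(img)                  # 32x32 approximation band
centers = kmeans_1d(approx.ravel())
threshold = centers.mean()                 # midpoint between the two clusters
mask = approx > threshold                  # suspicious-region mask
assert mask[14, 16] and not mask[2, 2]     # mass detected, background rejected
```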

  9. Enclosure Transform for Interest Point Detection From Speckle Imagery.

    PubMed

    Yongjian Yu; Jue Wang

    2017-03-01

    We present a fast enclosure transform (ET) to localize complex objects of interest in speckle imagery. The approach exploits the spatial confinement of regional features in a sparse image feature representation. Unrelated, broken ridge features surrounding an object are organized collaboratively, giving rise to the enclosureness of the object. Three enclosure likelihood measures are constructed: the enclosure force, potential energy, and encloser count. In the transform domain, local maxima manifest the locations of objects of interest, for which only the intrinsic dimension is known a priori. The discrete ET algorithm is computationally efficient, on the order of O(MN) for N measuring distances across an image of M ridge pixels, and requires only a few, easily set parameters. We demonstrate and assess the performance of ET on the automatic detection of prostate locations in supra-pubic ultrasound images. ET yields superior results in terms of positive detection rate, accuracy and coverage.

  10. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods for detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Motivated by this, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, and improves the detection efficiency of ship targets in remote sensing images.

  11. Macromolecular diffractive imaging using imperfect crystals

    PubMed Central

    Ayyer, Kartik; Yefanov, Oleksandr; Oberthür, Dominik; Roy-Chowdhury, Shatabdi; Galli, Lorenzo; Mariani, Valerio; Basu, Shibom; Coe, Jesse; Conrad, Chelsie E.; Fromme, Raimund; Schaffer, Alexander; Dörner, Katerina; James, Daniel; Kupitz, Christopher; Metz, Markus; Nelson, Garrett; Lourdu Xavier, Paulraj; Beyerlein, Kenneth R.; Schmidt, Marius; Sarrou, Iosifina; Spence, John C. H.; Weierstall, Uwe; White, Thomas A.; Yang, Jay-How; Zhao, Yun; Liang, Mengning; Aquila, Andrew; Hunter, Mark S.; Robinson, Joseph S.; Koglin, Jason E.; Boutet, Sébastien; Fromme, Petra; Barty, Anton; Chapman, Henry N.

    2016-01-01

    The three-dimensional structures of macromolecules and their complexes are predominantly elucidated by X-ray protein crystallography. A major limitation is access to high-quality crystals, to ensure X-ray diffraction extends to sufficiently large scattering angles and hence yields sufficiently high-resolution information that the crystal structure can be solved. The observation that crystals with shrunken unit-cell volumes and tighter macromolecular packing often produce higher-resolution Bragg peaks1,2 hints that crystallographic resolution for some macromolecules may be limited not by their heterogeneity but rather by a deviation from strict positional ordering of the crystalline lattice. Such displacements of molecules from the ideal lattice give rise to a continuous diffraction pattern, equal to the incoherent sum of diffraction from rigid single molecular complexes aligned along several discrete crystallographic orientations and hence with an increased information content3. Although such continuous diffraction patterns have long been observed—and are of interest as a source of information about the dynamics of proteins4—they have not been used for structure determination. Here we show for crystals of the integral membrane protein complex photosystem II that lattice disorder increases the information content and the resolution of the diffraction pattern well beyond the 4.5 Å limit of measurable Bragg peaks, which allows us to directly phase5 the pattern. With the molecular envelope conventionally determined at 4.5 Å as a constraint, we then obtain a static image of the photosystem II dimer at 3.5 Å resolution. This result shows that continuous diffraction can be used to overcome long-supposed resolution limits of macromolecular crystallography, with a method that puts great value in commonly encountered imperfect crystals and opens up the possibility for model-free phasing6,7. PMID:26863980

  12. Complex noise suppression using a sparse representation and 3D filtering of images

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.

    2017-08-01

    A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise; subsequent image processing to suppress the additive noise, based on 3D filtering and a sparse representation of signals in a wavelet basis; and a concluding image processing procedure to clean the final image of errors that emerged in the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
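
    The first stage, detecting and replacing impulse-corrupted pixels, can be sketched with a simple median-based detector on synthetic data. This is a generic illustration only; the paper's actual detector and its 3D wavelet-domain second stage are not reproduced, and the threshold value is an assumption.

```python
import numpy as np

def remove_impulses(img, thresh=0.5):
    """Stage-1 sketch: flag pixels that deviate strongly from their 3x3
    median (likely impulses) and replace them with that median."""
    padded = np.pad(img, 1, mode='edge')
    # Stack the 9 shifted views forming each pixel's 3x3 neighbourhood.
    windows = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    med = np.median(windows, axis=0)
    impulses = np.abs(img - med) > thresh
    return np.where(impulses, med, img), impulses

clean = np.full((32, 32), 0.5)
noisy = clean.copy()
noisy[5, 5] = 1.0     # "salt" impulse
noisy[20, 8] = 0.0    # "pepper" impulse
restored, mask = remove_impulses(noisy, thresh=0.3)
assert mask.sum() == 2                 # exactly the two impulses are flagged
assert np.allclose(restored, clean)    # background pixels are untouched
```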

  13. Quaternion Based Thermal Condition Monitoring System

    NASA Astrophysics Data System (ADS)

    Wong, Wai Kit; Loo, Chu Kiong; Lim, Way Soong; Tan, Poi Ngee

    In this paper, we propose a new and effective machine condition monitoring system using a log-polar mapper, a quaternion-based thermal image correlator, and a max-product fuzzy neural network classifier. Two classification characteristics are applied in the proposed system: the peak-to-sidelobe ratio (PSR) and the real-to-complex ratio of the discrete quaternion correlation output (p-value). A large PSR and p-value indicate a good match between the input thermal image and a particular reference image, whereas small values indicate a poor match or no match. In simulation, we also find that log-polar mapping helps solve the rotation- and scaling-invariance problems in quaternion-based thermal image correlation. Besides that, log-polar mapping offers a two-fold data compression capability and smooths the output correlation plane, enabling more reliable measurement of the PSR and p-values. Simulation results also show that the proposed system is an efficient machine condition monitoring system with an accuracy of more than 98%.
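
    The PSR statistic used above can be sketched directly from a correlation plane: the peak height relative to the mean and spread of the sidelobe region. The exclusion-window size is an assumption, and the quaternion correlation itself is not reproduced here.

```python
import numpy as np

def peak_to_sidelobe_ratio(corr, exclude=2):
    """PSR of a correlation plane: (peak - mean(sidelobe)) / std(sidelobe),
    where the sidelobe is the plane minus a small window around the peak."""
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    peak = corr[r, c]
    mask = np.ones_like(corr, dtype=bool)
    mask[max(r - exclude, 0):r + exclude + 1,
         max(c - exclude, 0):c + exclude + 1] = False
    sidelobe = corr[mask]
    return (peak - sidelobe.mean()) / sidelobe.std()

rng = np.random.default_rng(1)
plane = rng.normal(0.0, 0.1, (32, 32))
plane[16, 16] = 5.0                                   # sharp peak: good match
good = peak_to_sidelobe_ratio(plane)
flat = peak_to_sidelobe_ratio(rng.normal(0.0, 0.1, (32, 32)))
assert good > flat                    # a sharp peak yields a much larger PSR
```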

  14. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video pictures from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of the system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the DSP and shrink the executable code. At the same time, a proper address range is assigned to each memory, which have different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, resulting in stable, high performance.
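
    The DCT at the heart of the JPEG coding stage can be illustrated with an orthonormal 8-point DCT-II basis applied to an 8x8 block. This is a minimal numpy sketch of the transform JPEG uses, not the fast fixed-point implementation the paper runs on the DSP.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the transform used by JPEG)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)    # DC row gets the smaller scale factor
    return c

C = dct_matrix()
assert np.allclose(C @ C.T, np.eye(8))      # orthonormal: inverse = transpose

block = np.outer(np.arange(8), np.ones(8))  # smooth 8x8 test block
coeffs = C @ block @ C.T                    # forward 2D DCT (separable)
restored = C.T @ coeffs @ C                 # inverse 2D DCT
assert np.allclose(restored, block)         # lossless round trip
```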

  15. Multiresolution multiscale active mask segmentation of fluorescence microscope images

    NASA Astrophysics Data System (ADS)

    Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2009-08-01

    We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.

  16. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    PubMed

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations.

  17. Comparison of spike-sorting algorithms for future hardware implementation.

    PubMed

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator is chosen as the optimal spike detection algorithm, as it is the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
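
    The nonlinear energy operator chosen for detection is simple enough to sketch: psi[n] = x[n]^2 - x[n-1]*x[n+1], which is large only where the signal is simultaneously large in amplitude and rapidly changing. The synthetic spike below is illustrative, not recorded data.

```python
import numpy as np

def nonlinear_energy_operator(x):
    """Nonlinear (Teager) energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

rng = np.random.default_rng(2)
signal = rng.normal(0.0, 0.1, 200)                  # background noise
signal[100:104] += np.array([1.0, 3.0, -2.0, 0.5])  # spike-like transient
psi = nonlinear_energy_operator(signal)
detected = int(np.argmax(psi))
assert 99 <= detected <= 104      # NEO peaks at the spike location
```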

  18. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on the random sequence generated from the generalized information domain and permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security.
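
    The permutation-diffusion architecture described above can be sketched as follows. Note the hedge: a seeded numpy PRNG stands in for the paper's generalized-information-domain sequence generator, so this shows only the P-box/keystream mechanics, not the actual key schedule or drift factor.

```python
import numpy as np

def encrypt(img, key):
    """Permutation-diffusion sketch: shuffle pixel positions with a
    key-derived P-box, then XOR with a keystream. A seeded PRNG is a
    stand-in for the paper's random sequence generator."""
    rng = np.random.default_rng(key)
    flat = img.ravel()
    pbox = rng.permutation(flat.size)                      # scrambling P-box
    stream = rng.integers(0, 256, flat.size, dtype=np.uint8)  # keystream
    return (flat[pbox] ^ stream).reshape(img.shape)

def decrypt(cipher, key):
    """Regenerate the same P-box/keystream from the key and invert."""
    rng = np.random.default_rng(key)
    pbox = rng.permutation(cipher.size)
    stream = rng.integers(0, 256, cipher.size, dtype=np.uint8)
    flat = cipher.ravel() ^ stream       # undo diffusion
    plain = np.empty_like(flat)
    plain[pbox] = flat                   # undo permutation
    return plain.reshape(cipher.shape)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cipher = encrypt(img, key=42)
assert np.array_equal(decrypt(cipher, key=42), img)   # exact round trip
assert not np.array_equal(cipher, img)                # ciphertext differs
```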

  19. Supervoxels for graph cuts-based deformable image registration using guided image filtering

    NASA Astrophysics Data System (ADS)

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-11-01

    We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixel/voxel-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state-of-the-art methods in continuous and discrete image registration, achieving a target registration error of 1.16 mm on average per landmark.

  20. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.

    PubMed

    Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A

    2017-10-04

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixel/voxel-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a target registration error of 1.16 mm on average per landmark.

  1. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering

    PubMed Central

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-01-01

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixel/voxel-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a target registration error of 1.16 mm on average per landmark. PMID:29225433

  2. An image overall complexity evaluation method based on LSD line detection

    NASA Astrophysics Data System (ADS)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    In the man-made world, both urban traffic roads and engineered buildings contain many linear features. Research on the image complexity of linear information has therefore become an important direction in the digital image processing field. This paper detects the straight-line information in an image and uses the lines as parameter indices to establish a quantitative and accurate mathematical relationship for image complexity. We use the LSD line detection algorithm, which has good straight-line detection performance, and classify the detected lines according to an expert-consultation strategy. A neural network is then trained to obtain the weight coefficients of the indices, and the image complexity is calculated with the resulting complexity model. The experimental results show that the proposed method is effective: the number of straight lines in the image and their degree of dispersion, uniformity, and related statistics all affect the complexity of the image.
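
    A complexity score of the kind described, combining line count, spatial dispersion, and orientation uniformity, might look like the following sketch. The weights are hypothetical placeholders for the paper's neural-network-trained coefficients, and no LSD detector is run here; the input is simply a list of already-detected segments.

```python
import numpy as np

def line_complexity(segments, weights=(0.5, 0.3, 0.2)):
    """Illustrative complexity score from detected line segments given as
    (x1, y1, x2, y2) tuples: weighted sum of line count, spatial dispersion
    of midpoints, and orientation-histogram entropy (uniformity).
    The weights are hypothetical, not the paper's trained coefficients."""
    seg = np.asarray(segments, dtype=float)
    count = len(seg)
    mids = (seg[:, :2] + seg[:, 2:]) / 2.0
    dispersion = mids.std(axis=0).mean()
    angles = np.arctan2(seg[:, 3] - seg[:, 1], seg[:, 2] - seg[:, 0])
    hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # orientation uniformity
    w1, w2, w3 = weights
    return w1 * count + w2 * dispersion + w3 * entropy

few = [(0, 0, 10, 0), (0, 5, 10, 5)]              # two parallel lines
many = [(i, 0, 10 - i, 10) for i in range(10)]    # ten lines, varied angles
assert line_complexity(many) > line_complexity(few)
```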

  3. Mathematical Methods for Optical Physics and Engineering

    NASA Astrophysics Data System (ADS)

    Gbur, Gregory J.

    2011-01-01

    1. Vector algebra; 2. Vector calculus; 3. Vector calculus in curvilinear coordinate systems; 4. Matrices and linear algebra; 5. Advanced matrix techniques and tensors; 6. Distributions; 7. Infinite series; 8. Fourier series; 9. Complex analysis; 10. Advanced complex analysis; 11. Fourier transforms; 12. Other integral transforms; 13. Discrete transforms; 14. Ordinary differential equations; 15. Partial differential equations; 16. Bessel functions; 17. Legendre functions and spherical harmonics; 18. Orthogonal functions; 19. Green's functions; 20. The calculus of variations; 21. Asymptotic techniques; Appendices; References; Index.

  4. A projection gradient method for computing ground state of spin-2 Bose–Einstein condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hanquan, E-mail: hanquan.wang@gmail.com; Yunnan Tongchang Scientific Computing and Data Mining Research Center, Kunming, Yunnan Province, 650221

    In this paper, a projection gradient method is presented for computing the ground state of spin-2 Bose–Einstein condensates (BEC). We first propose the general projection gradient method for solving an energy functional minimization problem under multiple constraints, in which the energy functional takes real functions as independent variables. We next extend the method to solve a similar problem, where the energy functional takes complex functions as independent variables. We finally employ the method to find the ground state of spin-2 BEC. The key of our method is that, by constructing continuous gradient flows (CGFs), the ground state of spin-2 BEC can be computed as the steady-state solution of such CGFs. We discretize the CGFs by a conservative finite difference method along with a proper treatment of the nonlinear terms. We show that the numerical discretization is normalization- and magnetization-conservative and energy-diminishing. Numerical results for the ground state of spin-2 BEC and its energy are reported to demonstrate the effectiveness of the numerical method.
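
    The projected (normalized) gradient flow idea can be illustrated on a much simpler problem: the ground state of a single-component 1D quantum particle in a harmonic trap, where each gradient step on the energy is followed by projection back onto the normalization constraint. The spin-2 couplings and magnetization constraint of the paper are omitted; grid sizes and the step size are assumptions.

```python
import numpy as np

# Projected-gradient sketch for a ground state: minimize the energy
# <psi|H|psi> subject to the normalization constraint ||psi|| = 1.
n, L = 256, 16.0
dx = L / n
x = np.linspace(-L / 2, L / 2 - dx, n)

def apply_h(psi):
    """H psi = -1/2 psi'' + V psi with V = x^2/2 (periodic finite differences)."""
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + 0.5 * x**2 * psi

psi = np.exp(-x**2)                      # initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)      # project onto the constraint

tau = 0.002                              # step size (kept below stability limit)
for _ in range(2000):
    psi = psi - tau * apply_h(psi)       # gradient step on the energy
    psi /= np.sqrt(np.sum(psi**2) * dx)  # re-project after every step

energy = np.sum(psi * apply_h(psi)) * dx
assert abs(energy - 0.5) < 1e-2          # exact ground-state energy is 1/2
```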

  5. Adaptive feedback synchronisation of complex dynamical network with discrete-time communications and delayed nodes

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Ding, Yongsheng; Zhang, Lei; Hao, Kuangrong

    2016-08-01

    This paper considers the synchronisation of continuous complex dynamical networks with discrete-time communications and delayed nodes. The nodes in the dynamical networks act in a continuous manner, while the communications between nodes are discrete-time; that is, nodes communicate with others only at discrete time instants. The communication intervals within a communication period can be uncertain and variable. By using a piecewise Lyapunov-Krasovskii function to govern the characteristics of the discrete communication instants, we investigate adaptive feedback synchronisation and derive a criterion that guarantees the existence of the desired controllers. Globally exponential synchronisation can be achieved by the controllers under the updating laws. Finally, two numerical examples, a globally coupled network and nearest-neighbour coupled networks, are presented to demonstrate the validity and effectiveness of the proposed control scheme.

  6. Abstracting of suspected illegal land use in urban areas using case-based classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Fulong; Wang, Chao; Yang, Chengyun; Zhang, Hong; Wu, Fan; Lin, Wenjuan; Zhang, Bo

    2008-11-01

    This paper proposes a method that uses case-based classification of remote sensing images and applies it to extract information on suspected illegal land use in urban areas. Because the cases for imagery classification are discrete, the proposed method copes with oscillations of spectrum or backscatter within the same land-use category; it not only overcomes a deficiency of maximum likelihood classification (the prior probability of land use cannot be obtained) but also inherits the advantages of knowledge-based classification systems, such as artificial intelligence and automatic operation. Consequently, the proposed method classifies better. The researchers then used an object-oriented technique for shadow removal in densely built city zones. With multi-temporal SPOT 5 images with a resolution of 2.5×2.5 meters, the researchers found that the method can extract suspected illegal land use information in urban areas using a post-classification comparison technique.

  7. Reinforcement-learning-based dual-control methodology for complex nonlinear discrete-time systems with application to spark engine EGR operation.

    PubMed

    Shih, Peter; Kaul, Brian C; Jagannathan, S; Drallmeier, James A

    2008-08-01

    A novel reinforcement-learning-based dual-control adaptive neural network (NN) controller is developed to deliver a desired tracking performance for a class of complex feedback nonlinear discrete-time systems in the presence of bounded and unknown disturbances. The class consists of a second-order nonlinear discrete-time system in nonstrict feedback form and an affine nonlinear discrete-time system; the exhaust gas recirculation (EGR) operation of a spark ignition (SI) engine, for example, can be modeled as such a complex nonlinear discrete-time system. A dual-controller approach is undertaken, where the primary adaptive critic NN controller is designed for the nonstrict feedback nonlinear discrete-time system and the secondary controller for the affine nonlinear discrete-time system, with the two controllers together delivering the desired performance. The primary adaptive critic NN controller includes an NN observer for estimating the states and output, an NN critic, and two action NNs for generating the virtual and actual control inputs for the nonstrict feedback nonlinear discrete-time system, whereas an additional critic NN and an action NN are included for the affine nonlinear discrete-time system by assuming state availability. All NN weights adapt online towards minimization of a certain performance index, using a gradient-descent-based rule. Using Lyapunov theory, uniform ultimate boundedness (UUB) of the closed-loop tracking error, weight estimates, and observer estimates is shown. The adaptive critic NN controller is evaluated on an SI engine operating with high EGR levels, where the controller objective is to reduce cyclic dispersion in heat release while minimizing fuel intake. Simulation and experimental results indicate that engine-out emissions drop significantly at 20% EGR due to the reduction in heat-release dispersion, thus verifying the dual-control approach.

  8. Geometry segmentation of voxelized representations of heterogeneous microstructures using betweenness centrality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Rui; Singh, Sudhanshu S.; Chawla, Nikhilesh

    2016-08-15

    We present a robust method for automating removal of “segregation artifacts” in segmented tomographic images of three-dimensional heterogeneous microstructures. The objective of this method is to accurately identify and separate discrete features in composite materials where limitations in imaging resolution lead to spurious connections near close contacts. The method utilizes betweenness centrality, a measure of the importance of a node in the connectivity of a graph network, to identify voxels that create artificial bridges between otherwise distinct geometric features. To facilitate automation of the algorithm, we develop a relative centrality metric to allow for the selection of a threshold criterion that is not sensitive to inclusion size or shape. As a demonstration of the effectiveness of the algorithm, we report on the segmentation of a 3D reconstruction of a SiC particle reinforced aluminum alloy, imaged by X-ray synchrotron tomography.
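
    Betweenness centrality itself can be sketched with Brandes' algorithm on a toy graph of two clusters joined by a thin bridge, mimicking two features spuriously connected at a close contact. The graph and its labels are illustrative; the paper's relative-centrality thresholding is not reproduced.

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted, undirected graphs: for each node,
    accumulate the fraction of shortest paths passing through it."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                                 # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                 # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2.0 for v, c in bc.items()}       # undirected: halve

# Two dense clusters joined by a single bridge node (node 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],           # cluster A
       3: [2, 4],                                    # bridging "artifact"
       4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}           # cluster B
bc = betweenness_centrality(adj)
assert bc[3] == max(bc.values())   # the bridge node dominates the centrality
```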

  9. Matching CT and ultrasound data of the liver by landmark constrained image registration

    NASA Astrophysics Data System (ADS)

    Olesch, Janine; Papenberg, Nils; Lange, Thomas; Conrad, Matthias; Fischer, Bernd

    2009-02-01

    In navigated liver surgery, the key challenge is the registration of pre-operative planning data and intra-operative navigation data. Due to the patient's individual anatomy, the planning is based on segmented pre-operative CT scans, whereas ultrasound captures the actual intra-operative situation. In this paper we derive a novel method based on variational image registration and additionally given anatomical landmarks. For the first time, we embed the landmark information as inequality hard constraints, thereby allowing for inaccurately placed landmarks. The resulting optimization problem ensures the accuracy of the landmark fit while performing simultaneous intensity-based image registration. Following the discretize-then-optimize approach, the overall problem is solved by a generalized Gauss-Newton method. The arising linear system is attacked by the MinRes solver. We demonstrate the applicability of the new approach on clinical data, which leads to convincing results.

  10. Global regularizing flows with topology preservation for active contours and polygons.

    PubMed

    Sundaramoorthi, Ganesh; Yezzi, Anthony

    2007-03-01

    Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner, preventing a topology change long before the curve or polygon comes close to one. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and of global regularization prevent the geometrical inaccuracies common with simple discrete topology preservation schemes. The proposed topology-preserving geometric flow is the gradient flow arising from an energy based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which differs from traditional curve evolutions in the computer vision literature.

  11. Gel-based coloration technique for the submillimeter-scale imaging of labile phosphorus in sediments and soils with diffusive gradients in thin films.

    PubMed

    Ding, Shiming; Wang, Yan; Xu, Di; Zhu, Chungang; Zhang, Chaosheng

    2013-07-16

    We report a highly promising technique for the high-resolution imaging of labile phosphorus (P) in sediments and soils in combination with the diffusive gradients in thin films (DGT). This technique was based on the surface coloration of the Zr-oxide binding gel using the conventional molybdenum blue method following the DGT uptake of P to this gel. The accumulated mass of the P in the gel was then measured according to the grayscale intensity on the gel surface using computer-imaging densitometry. A pretreatment of the gel in hot water (85 °C) for 5 d was required to immobilize the phosphate and the formed blue complex in the gel during the color development. The optimal time required for a complete color development was determined to be 45 min. The appropriate volume of the coloring reagent added was 200 times that of the gel. A calibration equation was established under the optimized conditions, based on which a quantitative measurement of P was obtained when the concentration of P in solutions ranged from 0.04 mg L⁻¹ to 4.1 mg L⁻¹ for a 24 h deployment of typical DGT devices at 25 °C. The suitability of the coloration technique was well demonstrated by the observation of small, discrete spots with elevated P concentrations in a sediment profile.

  12. Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro

    2017-04-01

    We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of the Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.

  13. A new simulation system of traffic flow based on cellular automata principle

    NASA Astrophysics Data System (ADS)

    Shan, Junru

    2017-05-01

    Traffic flow is a complex, multi-behavior system, so it is difficult to express it with a single mathematical equation. With the rapid development of computer technology, simulating the interaction mechanisms between vehicles in order to reproduce complex traffic behavior has become an important research method. Governed by a preset collection of operating rules, a cellular automaton is a dynamical system that is discrete in both time and space. It can simulate the real traffic process well and offers a good way to study traffic problems.
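
    The abstract does not specify the rule set used; a classic example of such a discrete-time, discrete-space traffic model is the Nagel-Schreckenberg cellular automaton, sketched below (the lattice size, car count, and parameters are illustrative assumptions).

```python
import random

def nasch_step(road, v, n_cells=100, vmax=5, p=0.3, rng=random.Random(0)):
    """One parallel update of the Nagel-Schreckenberg traffic cellular automaton.

    road: sorted cell positions occupied by cars on a periodic lattice
    v:    car speeds, aligned with road
    """
    cars = len(road)
    new_road, new_v = [], []
    for i in range(cars):
        gap = (road[(i + 1) % cars] - road[i] - 1) % n_cells  # empty cells ahead
        speed = min(v[i] + 1, vmax)            # 1. accelerate
        speed = min(speed, gap)                # 2. brake to avoid collision
        if speed > 0 and rng.random() < p:     # 3. random slowdown
            speed -= 1
        new_v.append(speed)
        new_road.append((road[i] + speed) % n_cells)  # 4. move
    order = sorted(range(cars), key=lambda k: new_road[k])
    return [new_road[k] for k in order], [new_v[k] for k in order]

road = list(range(0, 100, 5))   # 20 cars, evenly spaced on a 100-cell ring
v = [0] * len(road)
for _ in range(50):
    road, v = nasch_step(road, v)
```

    Because each car's speed is capped by the gap ahead, the parallel update can never place two cars in the same cell; spontaneous "phantom" jams emerge from the random-slowdown rule alone.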

  14. Ultrahigh-Resolution 3-Dimensional Seismic Imaging of Seeps from the Continental Slope of the Northern Gulf of Mexico: Subsurface, Seafloor and Into the Water Column

    NASA Astrophysics Data System (ADS)

    Brookshire, B. N., Jr.; Mattox, B. A.; Parish, A. E.; Burks, A. G.

    2016-02-01

    Utilizing recently advanced ultrahigh-resolution 3-dimensional (UHR3D) seismic tools, we have imaged the seafloor geomorphology and associated subsurface aspects of seep-related expulsion features along the continental slope of the northern Gulf of Mexico with unprecedented clarity and continuity. Across an area of approximately 400 km², more than 50 discrete features were identified, and three general seafloor geomorphologies indicative of seep activity (mounds, depressions, and bathymetrically complex features) were quantitatively characterized. Moreover, areas of high seafloor reflectivity indicative of mineralization, and areas of coherent seismic amplitude anomalies in the near-seafloor water column indicative of active gas expulsion, were identified. In association with these features, shallow source-gas accumulations and migration pathways based on salt-related stratigraphic uplift and faulting were imaged. Shallow bottom-simulating reflectors (BSRs), interpreted to be free gas trapped under near-seafloor gas hydrate accumulations, were very clearly imaged.

  15. Image analysis for microelectronic retinal prosthesis.

    PubMed

    Hallum, L E; Cloherty, S L; Lovell, N H

    2008-01-01

    By way of extracellular stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete, luminous spots, so-called phosphenes, in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, the frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers, and for their perceptual errors of omission and commission.
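
    As a minimal sketch of the information-theoretic part, the mutual information between a stimulus and the resulting phosphene pattern can be estimated from a joint histogram of paired samples. The binary "scene" and the 10% rendering-error rate below are illustrative assumptions, not the paper's stimulus statistics.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in bits from a joint histogram of paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
stimulus = rng.integers(0, 2, 10_000).astype(float)   # binary "scene" samples
phosphene = stimulus.copy()
flip = rng.random(10_000) < 0.1                       # 10% rendering errors
phosphene[flip] = 1.0 - phosphene[flip]
mi = mutual_information(stimulus, phosphene, bins=2)  # ~ 1 - H(0.1) bits
```

    For this binary symmetric channel, the estimate should approach the theoretical 1 − H(0.1) ≈ 0.53 bits per phosphene.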

  16. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that makes it possible to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on discrete numerical grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters: complex scaling of three-dimensional numerical potentials can be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
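
    A one-dimensional analogue of the similarity-transformation approach can be sketched on a finite-difference grid (rather than the wavelet basis used in the paper): scaling x → x·e^{iθ} multiplies the kinetic term by e^{−2iθ} and evaluates the potential at complex coordinates, so continuum eigenvalues rotate into the lower half-plane while resonances stay put. The model potential below is an illustrative choice, not the paper's.

```python
import numpy as np

def complex_scaled_hamiltonian(theta, n=400, box=20.0):
    """H(theta) = exp(-2i*theta) * (-1/2 d^2/dx^2) + V(x * exp(i*theta))
    on a uniform finite-difference grid (units hbar = m = 1)."""
    x = np.linspace(-box / 2, box / 2, n)
    h = x[1] - x[0]
    d2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
          + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    z = x * np.exp(1j * theta)
    v = 0.5 * z**2 * np.exp(-0.2 * z**2)   # illustrative resonance-supporting barrier
    return -0.5 * np.exp(-2j * theta) * d2 + np.diag(v)

H = complex_scaled_hamiltonian(theta=0.4)
evals = np.linalg.eigvals(H)
# the rotated continuum acquires negative imaginary parts (~ angle 2*theta)
```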

  17. Opening up closure. Semiotics across scales

    PubMed

    Lemke

    2000-01-01

    The dynamic emergence of new levels of organization in complex systems is related to the semiotic reorganization of discrete/continuous variety at the level below as continuous/discrete meaning for the level above. In this view, both the semiotic and the dynamic closure of system levels are reopened to allow the development and evolution of greater complexity.

  18. Comparative analysis of two discretizations of Ricci curvature for complex networks.

    PubMed

    Samal, Areejit; Sreejith, R P; Gu, Jiao; Liu, Shiping; Saucan, Emil; Jost, Jürgen

    2018-06-05

    We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and the Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature, which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications, whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices.
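
    For unweighted graphs, the combinatorial Forman-Ricci curvature of an edge reduces to a simple degree formula, and the augmented variant adds a triangle term. A minimal sketch, with the 3-per-triangle weighting assumed from the combinatorial formula:

```python
def forman_curvature(edges):
    """Combinatorial Forman-Ricci curvature F(u,v) = 4 - deg(u) - deg(v)
    for each edge, plus an augmented variant adding 3 * (#triangles on it)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    plain, augmented = {}, {}
    for u, v in edges:
        f = 4 - len(adj[u]) - len(adj[v])
        t = len(adj[u] & adj[v])          # common neighbours = triangles on (u, v)
        plain[(u, v)] = f
        augmented[(u, v)] = f + 3 * t
    return plain, augmented

# a triangle with one pendant vertex attached at "c"
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
plain, aug = forman_curvature(edges)
```

    Ollivier-Ricci curvature, by contrast, requires solving an optimal-transport problem per edge, which is what makes the Forman substitution attractive for large networks.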

  19. Opaque cloud detection

    DOEpatents

    Roskovensky, John K [Albuquerque, NM

    2009-01-20

    A method of detecting clouds in a digital image comprising, for an area of the digital image: determining a reflectance value in at least three discrete electromagnetic spectrum bands; computing a first ratio of the difference of two reflectance values to the sum of the same two values; computing a second ratio of one reflectance value to another reflectance value; choosing one of the reflectance values; and concluding that an opaque cloud exists in the area if the results of the two computing steps and the choosing step each fall within a corresponding predetermined range.
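
    The three tests in the claim can be transcribed directly. The threshold ranges below are hypothetical placeholders, since the patent's predetermined ranges are not given in this record.

```python
def opaque_cloud(r1, r2, r3,
                 nd_range=(-0.1, 0.1),      # hypothetical thresholds --
                 ratio_range=(0.9, 1.1),    # the actual predetermined
                 refl_range=(0.3, 1.0)):    # ranges are not stated here
    """The claim's three tests on reflectances from three spectral bands:
    1. normalized difference (r1 - r2) / (r1 + r2)
    2. simple ratio r1 / r2
    3. magnitude of a chosen reflectance, r3
    An opaque cloud is declared only if all three fall in their ranges."""
    nd = (r1 - r2) / (r1 + r2)
    ratio = r1 / r2
    return (nd_range[0] <= nd <= nd_range[1]
            and ratio_range[0] <= ratio <= ratio_range[1]
            and refl_range[0] <= r3 <= refl_range[1])

print(opaque_cloud(0.55, 0.52, 0.60))   # bright, spectrally flat pixel -> True
print(opaque_cloud(0.80, 0.20, 0.60))   # strong spectral contrast -> False
```

    The intuition is that opaque clouds are both bright and spectrally flat, so a flatness test (the two ratios) is combined with a brightness test (the chosen reflectance).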

  20. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes when the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
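
    The core of such automation, generating finite-difference stencil weights from a symbolic requirement, can be sketched without Mathematica by Taylor matching over exact rationals (a stand-alone illustrative routine, not the authors' script):

```python
import math
from fractions import Fraction

def fd_weights(deriv, stencil):
    """Weights w such that sum_j w[j]*f(x + s_j*h) ~= h**deriv * f^(deriv)(x),
    found by matching Taylor coefficients exactly: sum_j w_j s_j^i = deriv! * [i == deriv]."""
    n = len(stencil)
    A = [[Fraction(s) ** i for s in stencil] for i in range(n)]   # A[i][j] = s_j^i
    b = [Fraction(math.factorial(deriv)) if i == deriv else Fraction(0)
         for i in range(n)]
    for col in range(n):                    # Gaussian elimination, exact arithmetic
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        inv = A[col][col]
        A[col] = [a / inv for a in A[col]]
        b[col] /= inv
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

print(fd_weights(1, [-1, 0, 1]))   # central 1st-derivative weights: -1/2, 0, 1/2
print(fd_weights(2, [-1, 0, 1]))   # central 2nd-derivative weights: 1, -2, 1
```

    Exact rational arithmetic avoids the rounding pitfalls of solving the small Vandermonde system in floating point.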

  1. Discrete shear-transformation-zone plasticity modeling of notched bars

    NASA Astrophysics Data System (ADS)

    Kondori, Babak; Amine Benzerga, A.; Needleman, Alan

    2018-02-01

    Plane strain tension analyses of un-notched and notched bars are carried out using discrete shear-transformation-zone plasticity. In this framework, the carriers of plastic deformation are shear transformation zones (STZs), which are modeled as Eshelby inclusions. Superposition is used to represent a boundary value problem solution in terms of discretely modeled Eshelby inclusions, given analytically for an infinite elastic medium, and an image solution that enforces the prescribed boundary conditions. The image problem is a standard linear elastic boundary value problem that is solved by the finite element method. Potential STZ activation sites are randomly distributed in the bars, and constitutive relations are specified for their evolution. Results are presented for un-notched bars, for bars with blunt notches, and for bars with sharp notches. The computed stress-strain curves are serrated, with the magnitude of the associated stress drops depending on bar size, notch acuity, and STZ evolution. Cooperative deformation bands (shear bands) emerge upon straining and, in some cases, high stress levels occur within the bands. Effects of specimen geometry and size on the stress-strain curves are explored. Depending on STZ kinetics, notch strengthening, notch insensitivity, or notch weakening is obtained. The analyses provide a rationale for some conflicting findings regarding notch effects on the mechanical response of metallic glasses.

  2. Complexity and chaos control in a discrete-time prey-predator model

    NASA Astrophysics Data System (ADS)

    Din, Qamar

    2017-08-01

    We investigate the complex behavior and chaos control in a discrete-time prey-predator model. Taking the Leslie-Gower prey-predator model into account, we propose a discrete-time prey-predator system with the predator partially dependent on the prey, and we investigate the boundedness, the existence and uniqueness of the positive equilibrium, and the bifurcation analysis of the system by using the center manifold theorem and bifurcation theory. Various feedback control strategies are implemented for controlling the bifurcation and chaos in the system. Numerical simulations are provided to illustrate the theoretical discussion.
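
    The paper's exact system is not reproduced in this record; as an illustration of a discrete-time prey-predator map with a coexistence equilibrium, a Ricker-type map can be iterated as below (parameters chosen so the fixed point x* = d/c, y* = r(1 − x*) is locally stable).

```python
import math

def step(x, y, r=0.5, c=1.0, d=0.5):
    """One iteration of an illustrative Ricker-type prey-predator map:
    prey grows logistically and is suppressed by predators; predators
    grow when prey is abundant and decline otherwise."""
    return x * math.exp(r * (1.0 - x) - y), y * math.exp(c * x - d)

x, y = 0.6, 0.3
for _ in range(2000):
    x, y = step(x, y)
# the orbit spirals into the coexistence fixed point x* = d/c = 0.5,
# y* = r * (1 - x*) = 0.25
```

    Raising r in such maps typically destabilizes the fixed point through a bifurcation, which is the regime where feedback control strategies like those in the paper become relevant.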

  3. Volumetric Echocardiographic Particle Image Velocimetry (V-Echo-PIV)

    NASA Astrophysics Data System (ADS)

    Falahatpisheh, Ahmad; Kheradvar, Arash

    2015-11-01

    Measurement of the 3D flow field inside the cardiac chambers has proven to be a challenging task. Current laser-based 3D PIV methods estimate the third component of the velocity rather than directly measuring it, and they cannot be used to image the opaque heart chambers. Modern echocardiography systems are equipped with 3D probes that enable imaging of the entire 3D opaque field. However, this feature has not yet been employed for 3D vector characterization of blood flow. For the first time, we introduce a method that generates the velocity vector field in 4D based on volumetric echocardiographic images. By assuming the conservation of brightness in 3D, blood speckles are tracked. A hierarchical 3D PIV method is used to account for large particle displacements. The discretized brightness transport equation is solved in a least-squares sense in interrogation windows of size 16³ voxels. We successfully validate the method in analytical and experimental cases. Volumetric echo data of a left ventricle are then processed in the systolic phase. The expected velocity fields were successfully predicted by V-Echo-PIV. In this work, we showed a method to image blood flow in 3D based on volumetric images of the human heart using no contrast agent.

  4. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, Daniel; Gray, Joe

    1997-01-01

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor.

  5. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, Daniel; Gray, Joe; Albertson, Donna G.

    2000-01-01

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor.

  6. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, Daniel; Gray, Joe; Albertson, Donna G.

    2002-01-01

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor.

  7. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, D.; Gray, J.

    1997-11-25

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor. 9 figs.

  8. The Use of Interactive Raster Graphics in the Display and Manipulation of Multidimensional Data

    NASA Technical Reports Server (NTRS)

    Anderson, D. C.

    1981-01-01

    Techniques for the review, display, and manipulation of multidimensional data are developed and described. Multidimensional data is meant in this context to describe scalar data associated with a three-dimensional geometry or otherwise too complex to be well represented by traditional graphs. Raster graphics techniques are used to display a shaded image of a three-dimensional geometry. The use of color to represent scalar data associated with the geometries in shaded images is explored. Distinct hues are associated with discrete data ranges, thus emulating the traditional representation of data with isarithms, or lines of constant numerical value. Data ranges are alternatively associated with a continuous spectrum of hues to show subtler data trends. The application of raster graphics techniques to the display of bivariate functions is explored.
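
    The discrete-hue mapping described above can be sketched as a simple banding function; the hue range and band count below are arbitrary illustrative choices.

```python
import colorsys

def banded_color(value, vmin, vmax, n_bands=8):
    """Map a scalar to one of n_bands distinct hues (isarithm-style banding);
    returns an (r, g, b) tuple with components in [0, 1]."""
    t = (value - vmin) / (vmax - vmin)
    band = min(int(t * n_bands), n_bands - 1)   # discrete data-range index
    hue = 0.7 * (1.0 - band / (n_bands - 1))    # blue (low) to red (high)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

low = banded_color(0.0, 0.0, 1.0)    # all values in one band share a color,
high = banded_color(1.0, 0.0, 1.0)   # emulating lines of constant value
```

    Replacing the `band` quantization with `t` itself yields the continuous-spectrum alternative the abstract mentions.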

  9. Quantitative analysis of packed and compacted granular systems by x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Fu, Xiaowei; Milroy, Georgina E.; Dutt, Meenakshi; Bentham, A. Craig; Hancock, Bruno C.; Elliott, James A.

    2005-04-01

    The packing and compaction of powders are common processes in the pharmaceutical, food, ceramic, and powder metallurgy industries. Understanding how particles pack in a confined space and how powders behave during compaction is crucial for producing high-quality products. This paper outlines a new technique, based on modern desktop X-ray tomography and image processing, to quantitatively investigate the packing of particles during powder compaction and to provide insight into how powders densify, relating materials properties and processing conditions to tablet manufacture by compaction. A variety of powder systems were considered, including glass, sugar, and NaCl, with a typical particle size of 200-300 μm, as well as binary mixtures of NaCl and glass spheres. The results are new and have been validated by SEM observation and by numerical simulations using the discrete element method (DEM). The research demonstrates that the XMT technique has potential for further investigation of pharmaceutical processing and even for verifying other physical models of complex packing.

  10. A Deep Penetration Problem Calculation Using AETIUS:An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power gets better and better, computer codes that use a deterministic method seem to be less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution of the flux throughout the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete-ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  11. DEM study on the interaction between wet cohesive granular materials and tools

    NASA Astrophysics Data System (ADS)

    Tsuji, Takuya; Matsui, Yu; Nakagawa, Yuta; Kadono, Yuuichi; Tanaka, Toshitsugu

    2013-06-01

    A model based on the discrete element method has been developed for the interaction between wet cohesive granular materials and mechanical tools with complex geometry. To obtain realistic results, the motion of 52.5 million particles was simulated, and the formation of multiple shear bands during an excavation process by a bulldozer blade was observed.

  12. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    PubMed

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions that obtain binary codes from hand-crafted features, which cannot optimally represent the information in the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions in this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes while simultaneously maintaining the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
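
    The third contribution, a loss coupling pairwise similarity with a quantization penalty, can be sketched as follows. This is an illustrative loss in the spirit described, with an assumed margin and weighting, not HRNH's exact formulation.

```python
import numpy as np

def hashing_loss(u, v, similar, margin=2.0, lam=0.1):
    """Pairwise loss on continuous embeddings u, v: a contrastive similarity
    term on the embedding distance plus a quantization penalty
    ||u - sign(u)||^2 that pushes embeddings toward binary codes."""
    d2 = float(np.sum((u - v) ** 2))
    sim = d2 if similar else max(0.0, margin - d2 ** 0.5) ** 2
    quant = float(np.sum((u - np.sign(u)) ** 2) + np.sum((v - np.sign(v)) ** 2))
    return sim + lam * quant

u = np.array([0.9, -0.8, 1.1, -1.0])   # nearly binarized embeddings
v = np.array([1.0, -1.0, 0.9, -0.9])
# similar & close: small loss; dissimilar & far: only the quantization term remains
```

    At inference time, the binary code is simply sign(u); keeping the quantization term small during training limits the error introduced by that final binarization.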

  13. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
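
    Algebraically, the method reduces each set of GLOF images to an overdetermined discrete linear system solved in the least-squares sense; with NumPy this step looks like the following, where a small synthetic system stands in for the discretized thin-oil-film operators.

```python
import numpy as np

# Overdetermined discrete linear system A q = b solved in the least-squares
# sense (synthetic data, standing in for the actual oil-film measurements)
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 3))                  # 200 equations, 3 unknowns
q_true = np.array([1.5, -0.7, 0.3])
b = A @ q_true + 0.01 * rng.normal(size=200)   # noisy right-hand side
q, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```

    Pooling all snapshots into one least-squares solve, rather than averaging per-snapshot solutions, is what distinguishes the LLS method from the SSA method compared in the paper.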

  14. Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment

    NASA Astrophysics Data System (ADS)

    Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc

    2013-04-01

Structural and mechanical analyses of the rock mass are key components of rock slope stability assessment. The complementary use of photogrammetric techniques [Poropat, 2001] and coupled DFN-DEM models [Harthong et al., 2012] provides a methodology that can be applied to complex 3D configurations. The DFN-DEM formulation [Scholtès & Donzé, 2012a,b] was chosen for modeling because it can explicitly take the fracture sets into account. Analyses conducted in 3D can produce very complex and unintuitive failure mechanisms. Therefore, a modeling strategy must be established in order to identify the key features that control the stability. For this purpose, a realistic case is presented to show the overall methodology, from photogrammetric acquisition to mechanical modeling. By combining Sirovision and YADE Open DEM [Kozicki & Donzé, 2008, 2009], it can be shown that even for large camera-to-rock-slope ranges (tested at about one kilometer), the accuracy of the data is sufficient to assess the role of the structures in the stability of a jointed rock slope. In this case, stereo pairs of 2D images were taken on site to create 3D surface models. Then, structural features in the unstable block zone were digitally identified with the Sirojoint software [Sirovision, 2010]. After acquiring the numerical topography, the 3D digitized and meshed surface was imported into the YADE Open DEM platform, where the studied rock mass was defined as a closed (manifold) volume bounding the numerical model. The discontinuities were then imported into the model as meshed planar elliptic surfaces. The model was then subjected to gravity loading. During this step, high values of cohesion were assigned to the discontinuities in order to avoid failure or block displacements triggered by inertial effects.
To assess the respective roles of the pre-existing discontinuities in the block stability, different configurations were tested, as well as different degrees of fracture persistence, in order to highlight the possible contribution of rock bridges to the development of the failure surface. It is believed that the proposed methodology can bring valuable complementary information to rock slope stability analysis in the presence of complex fracture systems for which a classical "Factor of Safety" is difficult to express.
    References
    • Harthong B., Scholtès L. & Donzé F.V., Strength characterization of rock masses, using a coupled DEM-DFN model, Geophysical Journal International, doi: 10.1111/j.1365-246X.2012.05642.x, 2012.
    • Kozicki J. & Donzé F.V., YADE-OPEN DEM: an open-source software using a discrete element method to simulate granular material, Engineering Computations, 26(7):786-805, 2009.
    • Kozicki J. & Donzé F.V., A new open-source software developed for numerical simulations using discrete modeling methods, Comp. Meth. in Appl. Mech. and Eng., 197:4429-4443, 2008.
    • Poropat G.V., New methods for mapping the structure of rock masses. In Proceedings, Explo 2001, Hunter Valley, New South Wales, 28-31 October 2001, pp. 253-260, 2001.
    • Scholtès L. & Donzé F.V., Modelling progressive failure in fractured rock masses using a 3D discrete element method, International Journal of Rock Mechanics and Mining Sciences, 52:18-30, 2012a.
    • Scholtès L. & Donzé F.V., DEM model for soft and hard rocks: role of grain interlocking on strength, J. Mech. Phys. Solids, doi: 10.1016/j.jmps.2012.10.005, 2012b.
    • Sirovision, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Siro3D Sirovision 3D Imaging Mapping System Manual, Version 4.1, 2010.

  15. Multisource image fusion method using support value transform.

    PubMed

    Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen

    2007-07-01

With the development of numerous imaging sensors, many images can be simultaneously captured by various sensors. However, there are many scenarios in which no single sensor can give the complete picture. Image fusion is an important approach to solving this problem; it produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses the support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), the data with larger support values have a physical meaning in the sense that they reveal the relative importance of the data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments were undertaken on multisource images. The results demonstrate that the proposed approach is effective and is superior to the conventional image fusion methods in terms of quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)), the mutual information, etc.
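
    The basic support value filter deduced from the mapped LS-SVM is not given in the abstract, but the "filling zeros" construction of the multiscale filters is a standard à-trous-style dilation, which might be sketched as follows (the function name is ours):

```python
import numpy as np

def dilate_filter(h, level):
    """Insert 2**level - 1 zeros between the taps of a base filter
    (à-trous-style dilation), matching the 'filling zeros' construction:
    the dilated filter analyses the image at a coarser scale without
    decimating it."""
    if level == 0:
        return np.asarray(h, dtype=float)
    step = 2 ** level
    out = np.zeros((len(h) - 1) * step + 1)
    out[::step] = h
    return out
```

    Because the signal is never downsampled, every scale of the resulting support value analysis stays at full image resolution, which is what "undecimated" refers to above.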

  16. Discrete Walsh Hadamard transform based visible watermarking technique for digital color images

    NASA Astrophysics Data System (ADS)

    Santhi, V.; Thangavelu, Arunkumar

    2011-10-01

As the size of the Internet grows enormously, the illegal manipulation of digital multimedia data becomes very easy with advancing technology tools. To protect such multimedia data from unauthorized access, digital watermarking systems are used. In this paper, a new Discrete Walsh Hadamard Transform-based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal processing attacks. Moreover, in the proposed method the watermark is embedded in a tiled manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization, and resizing attacks. The experimental results show that the algorithm is robust to common signal processing attacks, and the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
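
    The embedding rule itself is not detailed in the abstract, but the transform it builds on is standard; a minimal fast Walsh-Hadamard transform (natural/Hadamard order, unnormalized) can be sketched as:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in natural (Hadamard) order.
    The length of x must be a power of two; butterflies are unnormalized,
    so applying fwht twice returns the input scaled by len(x)."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # butterfly: sum and difference
        h *= 2
    return x
```

    The transform is its own inverse up to the factor len(x), which is what allows exact recovery of transform-domain coefficients when embedding and extracting a watermark.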

  17. Discrete-event system simulation on small and medium enterprises productivity improvement

    NASA Astrophysics Data System (ADS)

    Sulistio, J.; Hidayah, N. A.

    2017-12-01

Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production processes in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can handle complex problems due to the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages and variances of two trials and by a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor increased by up to 50% after adding capacity to the dyeing and drying machines.
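
    The paper's simulation model is not reproduced in the abstract; as a minimal illustration of the discrete-event mechanism it relies on, a single workstation (say, a dyeing machine, borrowing the machine name from the text) can be simulated by processing time-stamped events from a priority queue:

```python
import heapq

def simulate_station(arrivals, service_time):
    """Minimal discrete-event sketch of one workstation: jobs arrive at the
    given times and are served FIFO, one at a time, each taking
    service_time. Events are (time, kind) tuples on a priority queue.
    Returns the completion time of each job."""
    events = [(t, 'arrive') for t in arrivals]
    heapq.heapify(events)
    queue, busy, done = [], False, []
    while events:
        now, kind = heapq.heappop(events)      # advance to the next event
        if kind == 'arrive':
            queue.append(now)
        else:                                  # 'depart': the server frees up
            busy = False
            done.append(now)
        if not busy and queue:                 # start the next waiting job
            queue.pop(0)
            busy = True
            heapq.heappush(events, (now + service_time, 'depart'))
    return done
```

    A capacity increase of the kind evaluated in the paper would correspond to running several such servers in parallel or reducing the service time.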

  18. A simple method to calculate first-passage time densities with arbitrary initial conditions

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig

    2016-06-01

Numerous applications all the way from biology and physics to economics depend on the density of first crossings over a boundary. Motivated by the lack of general-purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete-time and non-smooth continuous-time processes. We derive a closed-form expression for the FPTD to a boundary in one dimension, in z-transform and Laplace-transform space. Two classes of problems are analysed in detail: discrete-time symmetric random walks (Markovian) and continuous-time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
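
    The IIA expressions themselves are not reproduced in the abstract; for the discrete-time symmetric random walk, an exact brute-force FPTD, of the kind such approximations can be validated against, can be computed by propagating probability mass on a truncated lattice:

```python
import numpy as np

def fptd_random_walk(boundary, n_steps, start=0, lo=-60):
    """Exact FPTD of a discrete-time symmetric random walk to an absorbing
    boundary above the start, computed by propagating probability mass on a
    truncated lattice (a brute-force reference, not the IIA itself)."""
    sites = np.arange(lo, boundary)        # the boundary site is absorbing
    p = np.zeros(len(sites))
    p[sites == start] = 1.0
    density = []
    for _ in range(n_steps):
        absorbed = 0.5 * p[-1]             # right step from boundary - 1
        density.append(float(absorbed))
        q = np.zeros_like(p)
        q[1:] += 0.5 * p[:-1]              # right steps inside the lattice
        q[:-1] += 0.5 * p[1:]              # left steps (mass leaking past lo
        p = q                              # is negligible for n_steps << |lo|)
    return density
```

    For a boundary at +1 starting from 0, this reproduces the known first-passage values f(1) = 1/2, f(3) = 1/8, f(5) = 1/16, with zero density at even steps.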

  19. Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    PubMed Central

    Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas

    2009-01-01

Background Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we will propose some examples of dynamical flows used in image contrasting and contouring. Methodology First we present the physiological basis of retinal function by showing the role of lateral inhibition in the generation of optical illusions and pathological processes. Then, based on these biological considerations about the real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach or its continuous version, i.e. a non-isotropic diffusion-reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows; we compare it with the watershed method and use it for contouring. Conclusions We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712
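
    As a toy illustration of the lateral-inhibition principle behind these contrast operators (the paper's actual network is not reproduced here), a one-dimensional discrete layer in which each unit subtracts a fraction of its neighbours' activity produces the Mach-band-like overshoot and undershoot at an edge:

```python
import numpy as np

def lateral_inhibition(signal, strength=0.5):
    """Toy discrete lateral-inhibition layer: each unit's output is its
    input minus a fraction of its two neighbours' inputs, which sharpens
    contrast at edges (a Mach-band-like effect)."""
    s = np.asarray(signal, dtype=float)
    left, right = np.roll(s, 1), np.roll(s, -1)
    left[0], right[-1] = s[0], s[-1]          # replicate the edge samples
    return s - strength * 0.5 * (left + right)
```

    Applied to a step edge, the output dips below zero just before the step and overshoots just after it, mimicking the illusory bands the retina produces at luminance boundaries.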

  20. A discrete control model of PLANT

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1985-01-01

    A model of the PLANT system using the discrete control modeling techniques developed by Miller is described. Discrete control models attempt to represent in a mathematical form how a human operator might decompose a complex system into simpler parts and how the control actions and system configuration are coordinated so that acceptable overall system performance is achieved. Basic questions include knowledge representation, information flow, and decision making in complex systems. The structure of the model is a general hierarchical/heterarchical scheme which structurally accounts for coordination and dynamic focus of attention. Mathematically, the discrete control model is defined in terms of a network of finite state systems. Specifically, the discrete control model accounts for how specific control actions are selected from information about the controlled system, the environment, and the context of the situation. The objective is to provide a plausible and empirically testable accounting and, if possible, explanation of control behavior.
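
    As a schematic of the "network of finite state systems" idea (the states and events below are invented for illustration and are not taken from the PLANT model), one node of such a network might be:

```python
# One finite-state node: the next control state is a function of the
# current state and an observed event, as in discrete control models of
# operator behaviour. States and events here are hypothetical.
TRANSITIONS = {
    ("monitoring", "alarm"): "diagnosing",
    ("diagnosing", "cause_found"): "compensating",
    ("compensating", "nominal"): "monitoring",
}

def step(state, event):
    """Return the next state, staying put on events with no defined edge."""
    return TRANSITIONS.get((state, event), state)
```

    A hierarchical model composes many such nodes, with higher-level nodes selecting which lower-level node currently holds the operator's focus of attention.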
