Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-09
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code serves as a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by an iterative phase retrieval technique with the QR code. We compare this technique with two other methods proposed in the literature: Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.
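The DRPE operation underlying this and several later abstracts can be sketched in a few lines of NumPy. This is an illustrative digital simulation only — not the authors' simplified interferometer-free setup, and the QR-code phase-retrieval step is omitted; mask and image sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def drpe_encrypt(img, phase1, phase2):
    # One random phase mask in the input plane, a second in the
    # Fourier plane of a (simulated) 4f system.
    x = img * np.exp(2j * np.pi * phase1)
    return np.fft.ifft2(np.fft.fft2(x) * np.exp(2j * np.pi * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    # Undo the Fourier-plane mask, then the input-plane mask.
    X = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    return np.abs(np.fft.ifft2(X) * np.exp(-2j * np.pi * phase1))

img = rng.random((32, 32))
p1, p2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(img, p1, p2)
recovered = drpe_decrypt(cipher, p1, p2)
assert np.allclose(recovered, img)   # the exact phase keys recover the image
```

The ciphertext amplitude looks like stationary noise, which is what makes the phase masks act as keys.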
Negative base encoding in optical linear algebra processors
NASA Technical Reports Server (NTRS)
Perlee, C.; Casasent, D.
1986-01-01
In the digital multiplication by analog convolution algorithm, the bits of two encoded numbers are convolved to form the product of the two numbers in mixed binary representation; this output can be easily converted to binary. Attention is presently given to negative base encoding, treating base -2 initially, and then showing that the negative base system can be readily extended to any radix. In general, negative base encoding in optical linear algebra processors represents a more efficient technique than either sign magnitude or 2's complement encoding, when the additions of digitally encoded products are performed in parallel.
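The base −2 arithmetic referred to above is easy to demonstrate digitally. A minimal sketch, assuming the standard negative-base digit extraction with non-negative remainders (function names are ours, not from the paper):

```python
def to_negabase(n, base=-2):
    # Digits least-significant first; remainders forced non-negative.
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:          # Python floors toward -inf, so fix up
            r -= base      # i.e. r += |base|
            n += 1
        digits.append(r)
    return digits

def from_negabase(digits, base=-2):
    return sum(d * base**i for i, d in enumerate(digits))

# Every integer, positive or negative, gets a digit string with no sign bit.
assert to_negabase(3) == [1, 1, 1]          # 3 = 1*4 + 1*(-2) + 1*1
assert all(from_negabase(to_negabase(n)) == n for n in range(-50, 51))
# The same procedure extends to any negative radix, e.g. base -3:
assert all(from_negabase(to_negabase(n, -3), -3) == n for n in range(-50, 51))
```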
A novel shape-based coding-decoding technique for an industrial visual inspection system.
Mukherjee, Anirban; Chaudhuri, Subhasis; Dutta, Pranab K; Sen, Siddhartha; Patra, Amit
2004-01-01
This paper describes a unique single camera-based dimension storage method for image-based measurement. The system has been designed and implemented in one of the integrated steel plants of India. The purpose of the system is to encode the frontal cross-sectional area of an ingot. The encoded data will be stored in a database to facilitate the future manufacturing diagnostic process. The compression efficiency and reconstruction error of the lossy encoding technique have been reported and found to be quite encouraging.
Error-free holographic frames encryption with CA pixel-permutation encoding algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua
2018-01-01
Securing video data during network transmission is essential; cryptography makes the data unreadable to unauthorized users. In this paper, we propose a holographic frames encryption technique based on a cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm addresses the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. When the encoder operates under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system, and installation errors. Behavior can be improved by techniques that compensate for the error by processing the measurement signals. In this work a new "ad hoc" methodology is presented to compensate for the error of an encoder working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy.
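The correction implied by the Lissajous-figure fitting can be sketched as follows. This is a deliberately simplified stand-in: extrema-based estimation of offsets and amplitude mismatch rather than the paper's fitting-plus-look-up-table procedure:

```python
import numpy as np

def correct_quadrature(u, v):
    # Estimate each channel's offset and amplitude from its extrema,
    # then normalize so the Lissajous figure becomes the ideal unit circle.
    cu, cv = (u.max() + u.min()) / 2, (v.max() + v.min()) / 2
    au, av = (u.max() - u.min()) / 2, (v.max() - v.min()) / 2
    return (u - cu) / au, (v - cv) / av

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
u = 1.3 * np.cos(theta) + 0.2      # distorted cosine channel
v = 0.8 * np.sin(theta) - 0.1      # distorted sine channel
uc, vc = correct_quadrature(u, v)
phase = np.unwrap(np.arctan2(vc, uc))
assert np.allclose(phase, np.unwrap(theta), atol=1e-6)  # position recovered
```

In a real encoder the position within one grating period is read from exactly this arctangent, so offset and gain errors map directly into position error.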
Wavelet filtered shifted phase-encoded joint transform correlation for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new wavelet-filter-based shifted-phase-encoded joint transform correlation (WPJTC) technique has been proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal method has been proposed by considering discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination compared to alternate pattern recognition techniques such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale facial database and the extended Yale facial database under different environments such as illumination variation, noise, and 3D changes in facial expressions. Test results show that the proposed WPJTC yields better performance compared to alternate JTC-based face recognition techniques.
Information encoder/decoder using chaotic systems
Miller, Samuel Lee; Miller, William Michael; McWhorter, Paul Jackson
1997-01-01
The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals.
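The invert-the-dynamics idea can be illustrated with a one-dimensional chaotic map. A toy sketch, assuming a logistic map rather than whatever circuit the patent uses: the message perturbs the iteration, the transmitted signal is the chaotic state sequence, and the receiver, knowing the map, inverts it:

```python
def encode(message, r=3.9, eps=1e-3, x0=0.37):
    # Each message sample slightly perturbs the logistic-map dynamics;
    # the transmitted signal is the chaotic state sequence itself.
    xs = [x0]
    for m in message:
        x = xs[-1]
        xs.append(r * x * (1 - x) + eps * m)
    return xs

def decode(xs, r=3.9, eps=1e-3):
    # Invert the known dynamics to recover the perturbations.
    return [(xs[i + 1] - r * xs[i] * (1 - xs[i])) / eps
            for i in range(len(xs) - 1)]

msg = [0.4, 0.9, 0.1, 0.6]
assert all(abs(a - b) < 1e-9 for a, b in zip(decode(encode(msg)), msg))
```

Without the map parameters an eavesdropper sees only a chaotic waveform, which is the security argument such systems make.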
Information encoder/decoder using chaotic systems
Miller, S.L.; Miller, W.M.; McWhorter, P.J.
1997-10-21
The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals. 32 figs.
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo
2018-01-01
An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, together with erosion and dilation operations, is implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while CGI not only reduces the data amount of the ciphertext but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
Data Embedding for Covert Communications, Digital Watermarking, and Information Augmentation
2000-03-01
…proposed an image authentication algorithm based on the fragility of messages embedded in digital images using LSB encoding. In [Walt95], he proposes… [Remainder of snippet is table-of-contents and figure-list residue covering: sample data embedding techniques; spatial techniques; LSB encoding in intensity images; effects of LSB encoding; the EzStego algorithm; data embedding in the frequency domain.]
A New Methodology for Vibration Error Compensation of Optical Encoders
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. When the encoder operates under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system, and installation errors. Behavior can be improved by techniques that compensate for the error by processing the measurement signals. In this work a new “ad hoc” methodology is presented to compensate for the error of an encoder working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067
A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"
NASA Astrophysics Data System (ADS)
Das, Rig; Tuithung, Themrichon
2013-03-01
This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding," International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible for the algorithm proposed in [3]. An 8-bit cover image is divided into non-overlapping blocks and a two-dimensional discrete cosine transform (2-D DCT) is performed on each block. Huffman encoding is performed on an 8-bit secret image, and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman-encoded bit stream and Huffman table …
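The embedding half of the reviewed scheme — Huffman-encode the secret, then place the bit stream in coefficient LSBs — can be sketched generically. A plain list of integers stands in for the quantized DCT coefficients, and the Huffman builder below is a textbook construction, not the authors' code:

```python
import heapq
from collections import Counter

def huffman_code(data):
    # Build a Huffman code book for the byte string `data`.
    heap = [(n, i, [s]) for i, (s, n) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    code = {s: '' for s in Counter(data)}
    while len(heap) > 1:
        n1, _, a = heapq.heappop(heap)
        n2, i, b = heapq.heappop(heap)
        for s in a: code[s] = '0' + code[s]   # left branch
        for s in b: code[s] = '1' + code[s]   # right branch
        heapq.heappush(heap, (n1 + n2, i, a + b))
    return code

def embed(coeffs, bits):
    # Overwrite the LSB of each coefficient with one payload bit.
    return [(c & ~1) | int(b) for c, b in zip(coeffs, bits)]

def extract(coeffs, n):
    return ''.join(str(c & 1) for c in coeffs[:n])

code = huffman_code(b"steganography")
bits = ''.join(code[s] for s in b"steganography")
stego = embed(list(range(100, 100 + len(bits))), bits)
assert extract(stego, len(bits)) == bits   # bit stream survives embedding
```

Note the review's point survives the sketch: the extractor must also know the Huffman table and the bit-stream length, which is precisely the information [3] fails to convey.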
A novel attack method about double-random-phase-encoding-based image hiding method
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen
2018-03-01
Using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. First, the secret message is transformed into a two-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Last, the encoded array is superimposed on a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
Information hiding based on double random-phase encoding and public-key cryptography.
Sheng, Yuan; Xin, Zhou; Alam, Mohammed S; Xi, Lu; Xiao-Feng, Li
2009-03-02
A novel information hiding method based on double random-phase encoding (DRPE) and the Rivest-Shamir-Adleman (RSA) public-key cryptosystem is proposed. In the proposed technique, the inherent diffusion property of DRPE is cleverly utilized to make up for the diffusion insufficiency of RSA public-key cryptography, while the RSA cryptosystem is utilized for simultaneous transmission of the ciphertext and the two phase masks, which is not possible under the DRPE technique alone. This technique combines the complementary advantages of the DRPE and RSA encryption techniques and brings security and convenience to efficient information transmission. Extensive numerical simulation results are presented to verify the performance of the proposed technique.
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The encoder was modified to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference versus compression factor figures, for low and high compression factors, respectively.
Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved competitive for high compression factors; and JPEG2000 combined with PDS provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference versus compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered an interesting alternative to traditional SEMG encoders for isometric signals. Besides, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already have such encoders available in the underlying hardware/software architecture.
Low-Density Parity-Check Code Design Techniques to Simplify Encoding
NASA Astrophysics Data System (ADS)
Perez, J. M.; Andrews, K.
2007-11-01
This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and gives the encoder design great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area while also reducing the encoding delay.
Combination Base64 Algorithm and EOF Technique for Steganography
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Saleh Ahmar, Ansari; Siregar, Dodi; Putera Utama Siahaan, Andysah; Faisal, Ilham; Rahman, Sayuti; Suita, Diana; Zamsuri, Ahmad; Abdullah, Dahlan; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Sriadhi, S.
2018-04-01
The steganography process combines mathematics and computer science. Steganography comprises methods and techniques for embedding data into another medium so that the contents are unreadable to anyone without the authority to read them. The main objective of using the Base64 method is to convert any file in order to achieve privacy. This paper discusses a steganography and encoding method using Base64, a set of encoding schemes that convert binary data into a series of ASCII codes. The EOF (end-of-file) technique is then used to embed the Base64-encoded text. As an example, a file is used to represent the texts; using the two methods together increases the security level for protecting the data. This research aims to secure many types of files in a particular medium with good security, without damaging the stored files or the cover media used.
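The Base64-plus-EOF combination can be sketched in a few lines, assuming a JPEG cover whose viewers stop rendering at the end-of-image marker (the cover bytes here are a fake placeholder, not real image data):

```python
import base64

JPEG_EOF = b"\xff\xd9"   # JPEG end-of-image marker

def hide(cover: bytes, secret: bytes) -> bytes:
    # EOF technique: viewers stop at the end-of-image marker, so data
    # appended after it stays invisible; Base64 keeps the payload printable.
    return cover + base64.b64encode(secret)

def reveal(stego: bytes) -> bytes:
    _, _, tail = stego.partition(JPEG_EOF)
    return base64.b64decode(tail)

cover = b"\xff\xd8<fake image data>" + JPEG_EOF
stego = hide(cover, b"meet at dawn")
assert reveal(stego) == b"meet at dawn"
assert stego.startswith(cover)      # the cover file itself is untouched
```

Because nothing inside the image changes, image quality is preserved exactly — the trade-off is that the file grows and the appended tail is trivially visible to anyone inspecting the bytes, hence the paper's pairing with an encoding step.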
A Multi-Encoding Approach for LTL Symbolic Satisfiability Checking
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2011-01-01
Formal behavioral specifications written early in the system-design process and communicated across all design phases have been shown to increase the efficiency, consistency, and quality of the system under development. To prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. Our focus here is on specifications expressed in linear temporal logic (LTL). We introduce a novel encoding of symbolic transition-based Büchi automata and a novel, "sloppy," transition encoding, both of which result in improved scalability. We also define novel BDD variable orders based on tree decomposition of formula parse trees. We describe and extensively test a new multi-encoding approach utilizing these novel encoding techniques to create 30 encoding variations. We show that our novel encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking.
Efficient Text Encryption and Hiding with Double-Random Phase-Encoding
Sang, Jun; Ling, Shenggui; Alam, Mohammad S.
2012-01-01
In this paper, a double-random phase-encoding technique-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
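The higher-bits/lower-bits packing described above can be sketched as follows, under the assumption of 8-bit array elements split into a 4-bit payload nibble and a 4-bit filler nibble (the split and the filler value are our illustrative choices, not parameters from the paper):

```python
import numpy as np

def text_to_array(text, shape, fill=0b0101):
    # Pack the secret text's bit stream into the 4 higher bits of each
    # element; the 4 lower bits hold a fixed pattern used at recovery time.
    bits = ''.join(f'{b:08b}' for b in text.encode())
    bits = bits.ljust(shape[0] * shape[1] * 4, '0')   # pad to capacity
    vals = [(int(bits[i:i + 4], 2) << 4) | fill
            for i in range(0, shape[0] * shape[1] * 4, 4)]
    return np.array(vals, dtype=np.uint8).reshape(shape)

def array_to_text(arr, nbytes):
    # Read back the high nibbles and reassemble the original bytes.
    bits = ''.join(f'{v >> 4:04b}' for v in arr.flat)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, nbytes * 8, 8)).decode()

arr = text_to_array("secret", (8, 8))
assert array_to_text(arr, 6) == "secret"   # lossless round trip
```

The point of the split is robustness: after DRPE encoding, superimposition, and recovery, small amplitude errors perturb the low bits first, so the payload in the high bits survives "accurately or almost accurately," as the abstract puts it.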
Information fusion based techniques for HEVC
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; Meyer-Baese, Uwe; Meyer-Baese, Anke; Grecos, Christos
2017-05-01
Addressing the conflicting requirements of a multi-parameter H.265/HEVC encoder system, this paper analyzes a set of optimizations to improve the trade-off between quality, performance, and power consumption for different reliability- and accuracy-critical applications. The method is based on Pareto optimization and has been tested at different resolutions on real-time encoders.
Digital Signal Processing Based Biotelemetry Receivers
NASA Technical Reports Server (NTRS)
Singh, Avtar; Hines, John; Somps, Chris
1997-01-01
This is an attempt to develop a biotelemetry receiver using digital signal processing technology and techniques. The receiver developed in this work is based on recovering signals that have been encoded using either Pulse Position Modulation (PPM) or Pulse Code Modulation (PCM) technique. A prototype has been developed using state-of-the-art digital signal processing technology. A Printed Circuit Board (PCB) is being developed based on the technique and technology described here. This board is intended to be used in the UCSF Fetal Monitoring system developed at NASA. The board is capable of handling a variety of PPM and PCM signals encoding signals such as ECG, temperature, and pressure. A signal processing program has also been developed to analyze the received ECG signal to determine heart rate. This system provides a base for using digital signal processing in biotelemetry receivers and other similar applications.
NASA Astrophysics Data System (ADS)
Ghosh, B.; Hazra, S.; Haldar, N.; Roy, D.; Patra, S. N.; Swarnakar, J.; Sarkar, P. P.; Mukhopadhyay, S.
2018-03-01
Over the last few decades, optics has proved its strong potential for conducting parallel logic, arithmetic, and algebraic operations, owing to its ultrafast speed in communication and computation. Many different logical and sequential operations using the all-optical frequency encoding technique have been proposed by several authors. Here, we focus on the all-optical dibit representation technique, which offers high-speed operation and reduces the bit-error problem. Exploiting this, we propose all-optical frequency-encoded dibit-based XOR and XNOR logic gates using optical switches such as the add/drop multiplexer (ADM) and the reflective semiconductor optical amplifier (RSOA). The operation of these gates has been verified through simulation using MATLAB (R2008a).
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
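The weighted-binary-code scanning idea generalizes to any scrambled one-to-one mapping: project ⌈log2 N⌉ binary patterns and read each output fiber's bit sequence to recover its input position. A toy sketch, with a random permutation standing in for the unknown fiber routing:

```python
import numpy as np

def calibration_patterns(n):
    # One binary image per bit: pattern k lights input position p
    # iff bit k of p is 1 (a weighted binary code, as in the paper).
    nbits = max(1, int(np.ceil(np.log2(n))))
    return [[(p >> k) & 1 for p in range(n)] for k in range(nbits)]

n = 64
perm = np.random.default_rng(3).permutation(n)     # unknown fiber scrambling
readings = [[pat[perm[out]] for out in range(n)]   # what the camera records
            for pat in calibration_patterns(n)]

# Decode: reassemble each output fiber's bits into its input index,
# which is exactly the Reconstruction Table (RT) entry for that fiber.
table = [sum(readings[k][out] << k for k in range(len(readings)))
         for out in range(n)]
assert table == list(perm)
```

Only log2(N) captures are needed instead of the N captures a spot-scanning calibration would take, which is the processing-time reduction the abstract reports.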
Review of Random Phase Encoding in Volume Holographic Storage
Su, Wei-Chia; Sun, Ching-Cherng
2012-01-01
Random phase encoding is a unique technique for volume hologram which can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.
2016-09-01
Thanks to the elegant reciprocal geometry of the Sagnac interferometer, many sources of drift that would be present in other polarimetry techniques were… interferometers. And is two orders of magnitude better than competing polarimetry-based Faraday techniques. Couple a Rb vapor cell to the Sagnac interferometer…
Biometrics based key management of double random phase encoding scheme using error control codes
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2013-08-01
In this paper, an optical security system is proposed in which the key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to biometric variation is corrected by encoding the key using a BCH code. A user-specific shuffling key is used to increase the separation between the genuine and impostor Hamming distance distributions. This shuffling key is further secured using RSA public-key encryption to enhance the security of the system. An XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA-encoded shuffling key and the data obtained from the XOR operation are stored in a token. The main advantage of the present technique is that key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user, which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process using the live biometrics of the user.
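The XOR-based key binding can be illustrated with a toy error-correcting code. A repetition code stands in for the paper's BCH code, and random bits stand in for the biometric feature vector; the RSA-protected shuffling key is omitted:

```python
import secrets

def repeat_encode(key_bits, r=3):
    # Stand-in for the paper's BCH code: an r-fold repetition code whose
    # majority vote absorbs small biometric variations.
    return [b for b in key_bits for _ in range(r)]

def repeat_decode(bits, r=3):
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key = [1, 0, 1, 1, 0, 0, 1, 0]                            # DRPE key bits
feat_enroll = [secrets.randbelow(2) for _ in range(24)]   # enrollment biometric
token = xor(repeat_encode(key), feat_enroll)              # stored in the token
feat_query = list(feat_enroll)
feat_query[5] ^= 1                                        # one noisy query bit
assert repeat_decode(xor(token, feat_query)) == key       # key still retrieved
```

The token alone reveals nothing about the key without the biometric, and the biometric alone is useless without the token — the "simultaneous presence" property the abstract highlights.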
Wavelength-encoded tomography based on optical temporal Fourier transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi; Wong, Kenneth K. Y., E-mail: kywong@eee.hku.hk
We propose and demonstrate a technique called wavelength-encoded tomography (WET) for non-invasive optical cross-sectional imaging, particularly beneficial in biological systems. WET utilizes a time lens to perform the optical Fourier transform, and the time-to-wavelength conversion generates a wavelength-encoded image of optical scattering from internal microstructures, analogous to interferometry-based imaging such as optical coherence tomography. The optical Fourier transform, in principle, provides twice the axial resolution of the electrical Fourier transform and greatly simplifies the digital signal processing after data acquisition. As a proof-of-principle demonstration, a 150-μm (ideally 36-μm) resolution is achieved based on a 7.5-nm-bandwidth swept pump, using a conventional optical spectrum analyzer. This approach can potentially achieve a 100-MHz or even higher frame rate with proven ultrafast spectrum analyzers. We believe this technique points towards the next generation of ultrafast optical tomographic imaging applications.
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images, without the need to decompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e., discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
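The first method's premise — that prediction residuals double as a texture descriptor — can be sketched directly. A left-neighbor predictor and a residual histogram are our illustrative choices; the paper's actual predictor and distance measure may differ:

```python
import numpy as np

def prediction_error_hist(img):
    # Predict each pixel from its left neighbor; the normalized residual
    # histogram serves as a texture descriptor, mimicking how predictively
    # coded data can be indexed without full decompression.
    err = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    hist, _ = np.histogram(err, bins=511, range=(-255, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
# A smooth image (small residuals) and a noisy one (broad residuals).
smooth = (128 + np.cumsum(rng.integers(-2, 3, (32, 32)), axis=1)).astype(np.uint8)
noisy = rng.integers(0, 256, (32, 32), dtype=np.uint8)
h1, h2, h3 = map(prediction_error_hist, (smooth, smooth + 1, noisy))

# L1 distance between descriptors: similar textures lie closer together.
assert np.abs(h1 - h2).sum() < np.abs(h1 - h3).sum()
```

In a real lossless codec these residuals are exactly what gets entropy-coded, so the descriptor comes essentially for free from the compressed stream.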
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces, which suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
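The cost argument for sparse measurement matrices is easy to see in a sketch: with d ones per column, each input sample touches only d accumulators, instead of roughly m/2 for a dense random binary matrix. A toy construction (not the QCAC or SRBM designs from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def sparse_binary_matrix(m, n, d=4):
    # Each column has exactly d ones: only d additions per input sample.
    phi = np.zeros((m, n), dtype=np.uint8)
    for col in range(n):
        phi[rng.choice(m, size=d, replace=False), col] = 1
    return phi

phi = sparse_binary_matrix(64, 256, d=4)
x = rng.standard_normal(256)       # stand-in for a neural signal window
y = phi @ x                        # compressed measurements, 4x reduction
assert phi.sum(axis=0).tolist() == [4] * 256   # column weight is exactly d
assert y.shape == (64,)
```

On silicon, the binary entries also mean the "multiplications" reduce to gated additions, which is where most of the power saving comes from.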
Luis Martínez Fuentes, Jose; Moreno, Ignacio
2018-03-05
A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
Improving both imaging speed and spatial resolution in MR-guided neurosurgery
NASA Astrophysics Data System (ADS)
Liu, Haiying; Hall, Walter A.; Truwit, Charles L.
2002-05-01
A robust near real-time MRI-based surgical guidance scheme has been developed and used in neurosurgical procedures performed in our combined 1.5 Tesla MR operating room. Because of the increased susceptibility difference in the area of the surgical site during surgery, the preferred real-time imaging technique is a single-shot imaging sequence based on half-Fourier acquisition with turbo spin echoes (HASTE). In order to maintain sufficient spatial resolution for visualizing surgical devices, such as a biopsy needle and catheter, we used a focused field of view (FOV) in the phase-encoding (PE) direction coupled with an outer-volume signal suppression (OVS) technique. The key concept of the method is to minimize the total number of required phase-encoding steps and the effective echo time (TE), as well as the longest TE for the high spatial-encoding steps. The concept was first demonstrated with a phantom experiment, which showed that when the water was doped with Gd-DTPA to match the relaxation rates of brain tissue, there was significant spatial blurring, primarily along the phase-encoding direction, with the conventional HASTE technique; the new scheme indeed minimized the spatial blur in the resulting image and improved needle visualization as anticipated. Using the new scheme in a typical MR-guided neurobiopsy procedure, the brain biopsy needle was easily seen against the tissue background with minimal blurring due to the inevitable T2 signal decay, even when the PE direction was set parallel to the needle axis. This MR-based guidance technique has allowed neurosurgeons to visualize the biopsy needle and monitor its insertion with better certainty at a near real-time pace.
Program risk analysis handbook
NASA Technical Reports Server (NTRS)
Batson, R. G.
1987-01-01
NASA regulations specify that formal risk analysis be performed on a program at each of several milestones. Program risk analysis is discussed as a systems analysis approach, an iterative process (identification, assessment, management), and a collection of techniques. These techniques, which range from extremely simple to complex network-based simulation, are described in this handbook in order to provide both analyst and manager with a guide for selection of the most appropriate technique. All program risk assessment techniques are shown to be based on elicitation and encoding of subjective probability estimates from the various area experts on a program. Techniques to encode the five most common distribution types are given. Then, a total of twelve distinct approaches to risk assessment are given. Steps involved, good and bad points, time involved, and degree of computer support needed are listed. Why risk analysis should be used by all NASA program managers is discussed. Tools available at NASA-MSFC are identified, along with commercially available software. Bibliography (150 entries) and a program risk analysis check-list are provided.
NASA Astrophysics Data System (ADS)
Chuang, Cheng-Hung; Chen, Yen-Lin
2013-02-01
This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. Steganographic optical image encryption based on the DRPE technique has been investigated to hide secret data in encrypted images. However, the DRPE technique is vulnerable to attacks, and many of the data hiding methods used in DRPE systems can distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process. Thus, the DRPE technique enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and secret data for advanced security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.
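The reversible bit-plane idea can be illustrated with a small sketch. This is a minimal sketch under stated assumptions: zlib stands in for the paper's JBIG2 coder, and the length-prefixed payload layout is invented here for illustration only.

```python
import zlib
import numpy as np

def embed_reversible(img, secret):
    """Reversible LSB embedding sketch: the LSB plane is compressed
    losslessly (zlib standing in for JBIG2) and re-embedded together with
    the secret, so the original image is exactly recoverable."""
    flat = img.reshape(-1)
    packed = np.packbits(flat & 1).tobytes()          # original LSB plane
    comp = zlib.compress(packed, 9)
    payload = (len(comp).to_bytes(4, "big") + comp
               + len(secret).to_bytes(4, "big") + secret)
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    stego = flat.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(img.shape)

def extract_and_recover(stego):
    """Return (secret, original image) from a stego image."""
    flat = stego.reshape(-1)
    data = np.packbits(flat & 1).tobytes()
    n_comp = int.from_bytes(data[:4], "big")
    comp = data[4:4 + n_comp]
    n_sec = int.from_bytes(data[4 + n_comp:8 + n_comp], "big")
    secret = data[8 + n_comp:8 + n_comp + n_sec]
    orig_lsb = np.unpackbits(
        np.frombuffer(zlib.decompress(comp), dtype=np.uint8))[:flat.size]
    return secret, ((flat & 0xFE) | orig_lsb).reshape(stego.shape)
```

The embedding capacity depends on how well the LSB plane compresses, which is why the paper uses a bi-level coder designed for bit planes.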
Simultaneously driven linear and nonlinear spatial encoding fields in MRI.
Gallichan, Daniel; Cocosco, Chris A; Dewdney, Andrew; Schultz, Gerrit; Welz, Anna; Hennig, Jürgen; Zaitsev, Maxim
2011-03-01
Spatial encoding in MRI is conventionally achieved by the application of switchable linear encoding fields. The general concept of the recently introduced PatLoc (Parallel Imaging Technique using Localized Gradients) encoding is to use nonlinear fields to achieve spatial encoding. Relaxing the requirement that the encoding fields must be linear may lead to improved gradient performance or reduced peripheral nerve stimulation. In this work, a custom-built insert coil capable of generating two independent quadratic encoding fields was driven with high-performance amplifiers within a clinical MR system. In combination with the three linear encoding fields, the combined hardware is capable of independently manipulating five spatial encoding fields. With the linear z-gradient used for slice-selection, there remain four separate channels to encode a 2D-image. To compare trajectories of such multidimensional encoding, the concept of a local k-space is developed. Through simulations, reconstructions using six gradient-encoding strategies were compared, including Cartesian encoding separately or simultaneously on both PatLoc and linear gradients as well as two versions of a radial-based in/out trajectory. Corresponding experiments confirmed that such multidimensional encoding is practically achievable and demonstrated that the new radial-based trajectory offers the PatLoc property of variable spatial resolution while maintaining finite resolution across the entire field-of-view. Copyright © 2010 Wiley-Liss, Inc.
Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki
2014-01-01
Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques are applicable to either susceptibility- or eddy-current-induced distortion alone, with a few exceptions. The present study compared the correction efficiency of the FSL tools “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips, and non–diffusion-weighted images were obtained with the same parameters, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior, as measured by increased FA values, to the 30 directions encoding scheme, despite comparable acquisition time.
This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging over the eddy_correct tool, especially with trilinear interpolation, using the 60 directions encoding scheme. PMID:25405472
An Investigation of Differential Encoding and Retrieval in Older Adult College Students.
ERIC Educational Resources Information Center
Shaughnessy, Michael F.; Reif, Laurie
Three experiments were conducted in order to clarify the encoding/retrieval dilemma in older adult students; and the recognition/recall test issue was also explored. First, a mnemonic technique based on the "key word" method of Funk and Tarshis was used; secondly, a semantic processing task was tried; and lastly, a repetition task, based…
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-based clones to expres...
Fast non-interferometric iterative phase retrieval for holographic data storage.
Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi
2017-12-11
Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications because of its easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane is more concentrated than in the reconstructed beam itself, the required diffraction efficiency of the recording medium is reduced, which will significantly improve the usable dynamic range of the recording medium. The phase retrieval requires only 10 iterations to achieve a phase data error rate below 5%, which is successfully demonstrated by recording and reconstructing a test image experimentally. We believe our method will further advance the holographic data storage technique in the era of big data.
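A minimal sketch of iterative Fourier-transform retrieval with a known data portion, assuming a phase-only object constraint; the array size, the fraction of known pixels, and the random initialization are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def retrieve_phase(fourier_mag, known_mask, known_phase, n_iter=10, seed=0):
    """Iterative Fourier-transform phase retrieval sketch.

    fourier_mag : measured amplitude at the Fourier plane (single camera shot).
    known_mask  : boolean array marking pixels whose phase is known a priori
                  (the 'known portion of the encoded data').
    known_phase : phase values enforced at those pixels every iteration.
    """
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, fourier_mag.shape)
    phi[known_mask] = known_phase[known_mask]
    for _ in range(n_iter):
        F = np.fft.fft2(np.exp(1j * phi))
        F = fourier_mag * np.exp(1j * np.angle(F))   # enforce measured amplitude
        phi = np.angle(np.fft.ifft2(F))              # phase-only object constraint
        phi[known_mask] = known_phase[known_mask]    # re-impose known data
    return phi
```

The known pixels act as the anchor that removes the phase ambiguities a purely intensity-based measurement would otherwise leave.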
Classified one-step high-radix signed-digit arithmetic units
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1998-08-01
High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to represent both the operands and the computation rules. This technique increases the spatial bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods
NASA Astrophysics Data System (ADS)
Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.
2012-03-01
Various MRI techniques were considered with respect to imaging of aqueous flow fields in low grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions, which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to be able to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.
Simultaneous transmission for an encrypted image and a double random-phase encryption key
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Zhou, Xin; Li, Da-Hai; Zhou, Ding-Fu
2007-06-01
We propose a method to simultaneously transmit a double random-phase encryption key and an encrypted image by making use of the fact that an acceptable decryption result can be obtained when only partial data of the encrypted image are used in the decryption process. First, the original image data are encoded as an encrypted image by a double random-phase encryption technique. Second, the double random-phase encryption key is encoded by the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. Then the amplitude of the encrypted image is modulated by the encoded key to form what we call an encoded image. Finally, the encoded image, which carries both the encrypted image and the encoded key, is delivered to the receiver. Based on such a method, the receiver can obtain an acceptable result, and secure transmission is guaranteed by the RSA cipher system.
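The two encoding stages can be sketched as follows. This is a hedged sketch, not the authors' implementation: the FFT-based DRPE, the seed-based mask generation, and the tiny textbook-RSA primes are illustrative assumptions only.

```python
import numpy as np

def phase_mask(seed, shape):
    """Random phase mask generated from a seed (the transmissible 'key')."""
    return np.exp(1j * np.random.default_rng(seed).uniform(0, 2 * np.pi, shape))

def drpe_encrypt(img, seed1, seed2):
    """Double random phase encoding: one mask in the input plane and one
    in the Fourier plane (classical 4f DRPE, FFT-based sketch)."""
    return np.fft.ifft2(np.fft.fft2(img * phase_mask(seed1, img.shape))
                        * phase_mask(seed2, img.shape))

def drpe_decrypt(enc, seed1, seed2):
    """Undo the Fourier-plane mask, then the input-plane mask."""
    return np.abs(np.fft.ifft2(np.fft.fft2(enc)
                               * np.conj(phase_mask(seed2, enc.shape)))
                  * np.conj(phase_mask(seed1, enc.shape)))

# Textbook RSA with toy primes (illustrative only, NOT secure) protects the seeds.
p, q = 61, 53
n_mod, e_pub, d_priv = p * q, 17, 2753        # e*d = 1 (mod lcm(p-1, q-1))
seeds = (11, 22)
enc_seeds = [pow(s, e_pub, n_mod) for s in seeds]        # sent with the image
dec_seeds = [pow(c, d_priv, n_mod) for c in enc_seeds]   # recovered by receiver
```

The amplitude-modulation step that merges the encoded key into the encrypted image is omitted here; the sketch only shows that the RSA-protected seeds suffice to rebuild both masks at the receiver.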
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the inverse-tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. A perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data of the current frame. Different video compression techniques are compared, backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
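The base-plus-residual prediction can be sketched per frame. This is a minimal sketch assuming a simple global gamma operator as the tone mapper (the paper supports general local operators) and an uncompressed residual.

```python
import numpy as np

def encode_two_layer(hdr, gamma=2.2):
    """Base layer: 8-bit tone-mapped frame (global gamma as a stand-in for
    the paper's tone mapping). Enhancement layer: residual between the
    inverse-tone-mapped base and the original HDR frame."""
    peak = hdr.max()
    base = np.round(255.0 * (hdr / peak) ** (1.0 / gamma)).astype(np.uint8)
    predicted = peak * (base / 255.0) ** gamma      # inverse tone mapping
    return base, hdr - predicted, peak

def decode_two_layer(base, residual, peak, gamma=2.2):
    """A legacy decoder uses `base` alone; an HDR decoder adds the
    enhancement residual to the inverse-tone-mapped base."""
    return peak * (base / 255.0) ** gamma + residual
```

Because the prediction concentrates most of the signal in the base layer, the residual is small and cheap to code, which is the point of the two-layer design.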
Frank N. Martin; Paul W. Tooley
2006-01-01
Molecular techniques have been developed for detection and identification of P. ramorum and other Phytophthora species that are based on the mitochondrially encoded sequences. One technique uses a Phytophthora genus specific primer to determine if a Phytophthora species is present, followed by...
Hiding Techniques for Dynamic Encryption Text based on Corner Point
NASA Astrophysics Data System (ADS)
Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna
2018-05-01
A hiding technique for dynamic text encryption using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSBs of the cover-image points and is used as the first phase of encryption. The Harris corner point algorithm is applied to the cover image to generate the corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding process uses the LSBs of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results demonstrate that the proposed scheme achieves good embedding quality, error-free text recovery, and high PSNR values.
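The corner-skipping embedding can be sketched as follows. This is an illustrative sketch, not the paper's method: the corner points are passed in as coordinates (a Harris detector would supply them), and a SHA-256 hash of the corner pixel values stands in for the dynamic AES key derivation. The key stays stable precisely because corner pixels are never modified.

```python
import hashlib
import numpy as np

def _skip_mask(shape, corners):
    """Flat boolean mask marking corner pixels ((N, 2) row/col array)."""
    mask = np.zeros(shape, bool).reshape(-1)
    mask[np.ravel_multi_index(corners.T, shape)] = True
    return mask

def embed_lsb_skip_corners(img, corners, message):
    """Embed message bits in pixel LSBs, skipping the corner points."""
    flat = img.copy().reshape(-1)
    skip = _skip_mask(img.shape, corners)
    # Hash of untouched corner pixels: stand-in for the dynamic AES key.
    key = hashlib.sha256(img[tuple(corners.T)].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    slots = np.flatnonzero(~skip)[:bits.size]
    flat[slots] = (flat[slots] & 0xFE) | bits
    return flat.reshape(img.shape), key

def extract_lsb_skip_corners(stego, corners, n_bytes):
    """Read the message back from the non-corner LSBs."""
    skip = _skip_mask(stego.shape, corners)
    slots = np.flatnonzero(~skip)[:n_bytes * 8]
    return np.packbits(stego.reshape(-1)[slots] & 1).tobytes()
```

The receiver can re-derive the same key from the stego image because the key material lives only in pixels the embedding never touches.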
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
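The three-stage pipeline can be sketched with a one-level Haar filter bank as the decorrelation stage and a uniform quantizer; the final lossless arithmetic coding stage is omitted for brevity, and the filter choice and step size are illustrative assumptions.

```python
import numpy as np

def haar_1d(x):
    """One level of a Haar analysis filter bank (decorrelation stage)."""
    x = x.astype(float)
    avg = (x[0::2] + x[1::2]) / 2.0   # low-pass subband
    det = (x[0::2] - x[1::2]) / 2.0   # high-pass subband
    return avg, det

def inv_haar_1d(avg, det):
    out = np.empty(avg.size * 2)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

def compress_block(x, step=0.5):
    """Quantize each subband uniformly; the resulting integers would feed
    the adaptive arithmetic coder in the final lossless stage."""
    avg, det = haar_1d(x)
    return np.round(avg / step).astype(int), np.round(det / step).astype(int)

def decompress_block(q_avg, q_det, step=0.5):
    return inv_haar_1d(q_avg * step, q_det * step)
```

The quantization step controls the rate-distortion trade-off: each reconstructed sample combines two coefficients, so its error is bounded by the step size.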
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by the double random-phase-encoding method using the extended fractional Fourier transform. The key of the encryption process has been encoded by the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and then the encrypted image is decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved by a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
Compression technique for large statistical data bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eggers, S.J.; Olken, F.; Shoshani, A.
1981-03-01
The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
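The cumulative run-length header with logarithmic lookup can be sketched as follows; a flat sorted header searched with bisection stands in for the paper's B-tree base level.

```python
import bisect

def compress_runs(values):
    """Run-length compress a sequence of repeating values. The header
    stores CUMULATIVE run lengths, so any record can later be located by
    binary search (logarithmic access time)."""
    header, constants = [], []
    total, i = 0, 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        total += j - i
        header.append(total)       # cumulative run length
        constants.append(values[i])
        i = j
    return header, constants

def lookup(header, constants, idx):
    """Return the idx-th original value in O(log n) via the header."""
    return constants[bisect.bisect_right(header, idx)]
```

Because the header entries are cumulative and sorted, `bisect_right` finds the run containing any record index without decompressing the stream.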
Intra-Operative Dosimetry in Prostate Brachytherapy
2006-11-01
The C-arms available in most hospitals do not have encoded rotational joints, so one never knows where the fluoro shots are coming from relative to one another; we have addressed this issue. CT- and MRI-based techniques were also proposed for seed matching, but they cannot be used intraoperatively and have poor resolution in the axial direction.
Efficient processing of MPEG-21 metadata in the binary domain
NASA Astrophysics Data System (ADS)
Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas
2005-10-01
XML-based metadata is widely adopted across the different communities, and plenty of commercial and open source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
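A bit-wise coder with context-dependent probability estimates can be sketched with float-precision interval arithmetic. This is an illustrative sketch only: a practical coder uses integer renormalization, the paper's interleaving technique differs in detail, and the Laplace-smoothed counting model is an assumption made here so that encoder and decoder adapt identically.

```python
def encode_adaptive(bits):
    """Shrink [0, 1) once per bit; each bit's probability estimate is
    adapted from the previously encoded bits (Laplace-smoothed counts)."""
    low, high = 0.0, 1.0
    ones, total = 1, 2                       # smoothed counts
    for b in bits:
        p1 = ones / total
        mid = low + (high - low) * (1.0 - p1)   # [low, mid) codes a 0
        low, high = (mid, high) if b else (low, mid)
        ones += b
        total += 1
    return (low + high) / 2.0                # any number in the interval

def decode_adaptive(code, n):
    """Replay the same adaptive model to recover n bits."""
    low, high = 0.0, 1.0
    ones, total = 1, 2
    out = []
    for _ in range(n):
        p1 = ones / total
        mid = low + (high - low) * (1.0 - p1)
        bit = int(code >= mid)
        out.append(bit)
        low, high = (mid, high) if bit else (low, mid)
        ones += bit
        total += 1
    return out
```

Because the decoder updates its counts from the bits it has already produced, the two sides track the same probability estimates without any side information.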
Frequency-domain elastic full waveform inversion using encoded simultaneous sources
NASA Astrophysics Data System (ADS)
Jeong, W.; Son, W.; Pyun, S.; Min, D.
2011-12-01
Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimation, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with the conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained by simultaneous sources are comparable to those obtained by individual sources, and the source signature is successfully estimated in the simultaneous-source technique.
Comparing the inverted results using the pseudo Hessian matrix with previous inversion results provided by the approximate Hessian matrix, it is noted that the latter are better than the former for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), by the Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
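Random phase encoding of simultaneous sources can be illustrated with a toy experiment: correlating the encoded "supershot" with one source's conjugate phase recovers that source's data plus crosstalk from the others, and the crosstalk averages out over independent encodings, mirroring its suppression over inversion iterations. The array sizes and averaging count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_supershot(data, phases):
    """Stack per-source frequency-domain data (n_src, n_rec) into a single
    'supershot' using one random phase weight per source."""
    return (np.exp(1j * phases)[:, None] * data).sum(axis=0)

# Decode source 0 by conjugate-phase correlation, averaged over many
# independent phase realizations to suppress the crosstalk terms.
n_src, n_rec, n_avg = 4, 64, 400
data = rng.normal(size=(n_src, n_rec)) + 1j * rng.normal(size=(n_src, n_rec))
est = np.zeros(n_rec, complex)
for _ in range(n_avg):
    ph = rng.uniform(0, 2 * np.pi, n_src)
    est += np.exp(-1j * ph[0]) * encode_supershot(data, ph)
est /= n_avg
```

A single supershot costs one forward simulation instead of `n_src`, which is the computational saving that motivates source encoding in the first place.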
Encoding techniques for complex information structures in connectionist systems
NASA Technical Reports Server (NTRS)
Barnden, John; Srinivas, Kankanahalli
1990-01-01
Two general information encoding techniques called relative position encoding and pattern similarity association are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short term information processing of the sort needed in common sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. The particular, simple forms that the relative position encoding and pattern similarity association techniques take in the authors' own connectionist system, called Conposit, are discussed in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.
An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
NASA Astrophysics Data System (ADS)
Brazhnik, Kristina; Grinevich, Regina; Efimov, Anton E.; Nabiev, Igor; Sukhanova, Alyona
2014-05-01
Advanced multiplexed assays have recently become an indispensable tool for clinical diagnostics. These techniques provide simultaneous quantitative determination of multiple biomolecules in a single sample quickly and accurately. The development of multiplex suspension arrays is currently of particular interest for clinical applications. Optical encoding of microparticles is the most available and easy-to-use technique. This technology uses fluorophores incorporated into microbeads to obtain individual optical codes. Fluorophore-encoded beads can be rapidly analyzed using classical flow cytometry or microfluidic techniques. We have developed a new generation of highly sensitive and specific diagnostic systems for detection of cancer antigens in human serum samples based on microbeads encoded with fluorescent quantum dots (QDs). The designed suspension microarray system was validated for quantitative detection of (1) free and total prostate specific antigen (PSA) in the serum of patients with prostate cancer and (2) carcinoembryonic antigen (CEA) and cancer antigen 15-3 (CA 15-3) in the serum of patients with breast cancer. The serum samples from healthy donors were used as a control. The antigen detection is based on the formation of an immune complex of a specific capture antibody (Ab), a target antigen (Ag), and a detector Ab on the surface of the encoded particles. The capture Ab is bound to the polymer shell of microbeads via an adapter molecule, for example, protein A. Protein A binds a monoclonal Ab in a highly oriented manner due to specific interaction with the Fc-region of the Ab molecule. Each antigen can be recognized and detected due to a specific microbead population carrying the unique fluorescent code. 100 and 231 serum samples from patients with different stages of prostate cancer and breast cancer, respectively, and those from healthy donors were examined using the designed suspension system. 
The data were validated by comparing with the results of the "gold standard" enzyme-linked immunosorbent assay (ELISA). They have shown that our approach is a good alternative to the diagnostics of cancer markers using conventional assays, especially in early diagnostic applications.
Makeyev, E V; Kolb, V A; Spirin, A S
1999-02-12
A novel cloning-independent strategy has been developed to generate a combinatorial library of PCR fragments encoding a murine single-chain antibody repertoire and express it directly in a cell-free system. The new approach provides an effective alternative to techniques involving in vivo procedures for the preparation and handling of large antibody libraries. The possible use of the described strategy in ribosome display is discussed.
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper, we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW, and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvement over the second-best tested method. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
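The geometric idea behind spherical hashing can be sketched in a few lines of Python. This is a simplified illustration only: the pivot selection below (random centers, median radii for balanced bits) is a crude stand-in for the paper's iterative optimization, and `spherical_hamming` follows the XOR-over-AND form of the proposed distance.

```python
import numpy as np

def train_spheres(data, n_bits, rng):
    # Pick hypersphere centers from the data; set each radius to the median
    # distance so that each bit splits the training set roughly in half
    # (a stand-in for the paper's balanced-partitioning optimization).
    centers = data[rng.choice(len(data), size=n_bits, replace=False)]
    radii = np.array([np.median(np.linalg.norm(data - c, axis=1))
                      for c in centers])
    return centers, radii

def encode(points, centers, radii):
    # Bit i is 1 iff the point lies inside hypersphere i.
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return (d <= radii).astype(np.uint8)

def spherical_hamming(a, b):
    # Spherical Hamming distance: differing bits normalized by the number
    # of spheres that contain both points.
    return np.sum(a ^ b) / max(int(np.sum(a & b)), 1)
```

With codes built this way, neighboring points share many enclosing spheres, so their spherical Hamming distance stays small.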
Sparse alignment for robust tensor learning.
Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming
2014-10-01
Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions of the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness of the alignment step of STA. The advantage of the proposed technique is that the difficulty of selecting the size of the local neighborhood in manifold learning based tensor feature extraction algorithms can be avoided. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides robustness to STA. Extensive experiments on well-known image databases, as well as action and hand gesture databases, by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with tensor-based unsupervised learning methods.
Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles
NASA Astrophysics Data System (ADS)
Kazemzadeh, Farnoud; Wong, Alexander
2016-12-01
Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm2 without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.
Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles.
Kazemzadeh, Farnoud; Wong, Alexander
2016-12-13
Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm2 without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of synthetic assembled gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-bas...
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.
Hitz, Benjamin C; Rowe, Laurence D; Podduturi, Nikhil R; Glick, David I; Baymuradov, Ulugbek K; Malladi, Venkat S; Chan, Esther T; Davidson, Jean M; Gabdank, Idan; Narayana, Aditi K; Onate, Kathrina C; Hilton, Jason; Ho, Marcus C; Lee, Brian T; Miyasato, Stuart R; Dreszer, Timothy R; Sloan, Cricket A; Strattan, J Seth; Tanaka, Forrest Y; Hong, Eurie L; Cherry, J Michael
2017-01-01
The Encyclopedia of DNA Elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general-purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata, and a robust API for querying the metadata. The software is fully open source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package.
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata
Podduturi, Nikhil R.; Glick, David I.; Baymuradov, Ulugbek K.; Malladi, Venkat S.; Chan, Esther T.; Davidson, Jean M.; Gabdank, Idan; Narayana, Aditi K.; Onate, Kathrina C.; Hilton, Jason; Ho, Marcus C.; Lee, Brian T.; Miyasato, Stuart R.; Dreszer, Timothy R.; Sloan, Cricket A.; Strattan, J. Seth; Tanaka, Forrest Y.; Hong, Eurie L.; Cherry, J. Michael
2017-01-01
The Encyclopedia of DNA Elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general-purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata, and a robust API for querying the metadata. The software is fully open source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package. PMID:28403240
Unconditionally secure multi-party quantum commitment scheme
NASA Astrophysics Data System (ADS)
Wang, Ming-Qiang; Wang, Xue; Zhan, Tao
2018-02-01
A new unconditionally secure multi-party quantum commitment scheme is proposed in this paper by encoding the committed message in the phase of a quantum state. Multi-party means that there is more than one recipient in our scheme. We show that our quantum commitment scheme is unconditionally hiding and binding, and that the hiding is perfect. Our technique is based on the interference of phase-encoded coherent states of light. Its security proof relies on the no-cloning theorem of quantum theory and the properties of quantum information.
Meher, J K; Meher, P K; Dash, G N; Raval, M K
2012-01-01
The first step in the gene identification problem based on genomic signal processing is to convert character strings into numerical sequences. These numerical sequences are then analysed spectrally, or with digital filtering techniques, for the period-3 peaks that are present in exons (coding regions) and absent in introns (non-coding regions). In this paper, we have shown that single-indicator sequences can be generated by encoding schemes based on physico-chemical properties. Two new methods are proposed for generating single-indicator sequences, based on hydration energy and dipole moments. The proposed methods produce high peaks at exon locations and effectively suppress false exons (intron regions having higher peaks than exon regions), resulting in a high discriminating factor, sensitivity, and specificity.
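The period-3 property described above is easy to reproduce numerically. The sketch below maps each nucleotide to a single physico-chemical value and reads off the spectral power at one-third of the sampling frequency; the property values are made-up placeholders, not the hydration-energy or dipole-moment figures used in the paper.

```python
import numpy as np

# Hypothetical per-nucleotide property values (illustrative only).
PROP = {'A': 0.23, 'C': 0.81, 'G': 0.52, 'T': 0.14}

def period3_power(seq):
    # Build the single-indicator sequence, remove the DC component, and
    # return the DFT power at the period-3 frequency (k = N/3).
    x = np.array([PROP[b] for b in seq], dtype=float)
    x -= x.mean()
    X = np.fft.fft(x)
    return float(abs(X[len(seq) // 3]) ** 2)
```

A codon-biased (exon-like) sequence shows a strong period-3 peak, while a random (intron-like) sequence of the same length does not.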
Optical delay encoding for fast timing and detector signal multiplexing in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, Alexander M.; Levin, Craig S., E-mail: cslevin@stanford.edu; Molecular Imaging Program at Stanford
2015-08-15
Purpose: The large number of detector channels in modern positron emission tomography (PET) scanners poses a challenge in terms of readout electronics complexity. Multiplexing schemes are typically implemented to reduce the number of physical readout channels, but often result in performance degradation. Novel methods of multiplexing in PET must be developed to avoid this data degradation. The preservation of fast timing information is especially important for time-of-flight PET. Methods: A new multiplexing scheme based on encoding detector interaction events with a series of extremely fast overlapping optical pulses with precise delays is demonstrated in this work. Encoding events in this way potentially allows many detector channels to be simultaneously encoded onto a single optical fiber that is then read out by a single digitizer. A two-channel silicon photomultiplier-based prototype utilizing this optical delay encoding technique along with dual-threshold time-over-threshold is demonstrated. Results: The optical encoding and multiplexing prototype achieves a coincidence time resolution of 160 ps full width at half maximum (FWHM) and an energy resolution of 13.1% FWHM at 511 keV with 3 × 3 × 5 mm^3 LYSO crystals. All interaction information for both detectors, including timing, energy, and channel identification, is encoded onto a single optical fiber with little degradation. Conclusions: Optical delay encoding and multiplexing technology could lead to time-of-flight PET scanners with fewer readout channels and simplified data acquisition systems.
Composite Bloom Filters for Secure Record Linkage.
Durham, Elizabeth Ashley; Kantarcioglu, Murat; Xue, Yuan; Toth, Csaba; Kuzu, Mehmet; Malin, Bradley
2014-12-01
The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname); however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance the competing goals of accuracy, efficiency, and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy.
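A minimal version of the BF field encoding at the heart of such schemes can be sketched as follows. The parameters (bigram tokenization, a 128-bit filter, four hash positions derived by double hashing) are illustrative choices, not those of the paper, and the composite multi-field bit-sampling step is omitted.

```python
import hashlib

def bigrams(field):
    s = f'_{field.lower()}_'          # pad so boundary characters count
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bf_encode(field, m=128, k=4):
    # Hash every bigram k times into an m-bit Bloom filter, deriving the
    # k indices from two base hashes (double hashing).
    bits = [0] * m
    for g in bigrams(field):
        h1 = int(hashlib.sha1(g.encode()).hexdigest(), 16)
        h2 = int(hashlib.md5(g.encode()).hexdigest(), 16)
        for i in range(k):
            bits[(h1 + i * h2) % m] = 1
    return bits

def dice(a, b):
    # Dice coefficient of two filters approximates the similarity of the
    # underlying strings, enabling approximate private matching.
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))
```

Similar names produce heavily overlapping filters, so their Dice score is much higher than for unrelated names; that same field-level regularity is what frequency attacks exploit and what composite multi-field filters are designed to break.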
Composite Bloom Filters for Secure Record Linkage
Durham, Elizabeth Ashley; Kantarcioglu, Murat; Xue, Yuan; Toth, Csaba; Kuzu, Mehmet; Malin, Bradley
2014-01-01
The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname); however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance the competing goals of accuracy, efficiency, and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy. PMID:25530689
Using Self-Generated Cues to Facilitate Recall: A Narrative Review
Wheeler, Rebecca L.; Gabbert, Fiona
2017-01-01
We draw upon the Associative Network model of memory, as well as the principles of encoding-retrieval specificity, and cue distinctiveness, to argue that self-generated cue mnemonics offer an intuitive means of facilitating reliable recall of personally experienced events. The use of a self-generated cue mnemonic allows for the spreading activation nature of memory, whilst also presenting an opportunity to capitalize upon cue distinctiveness. Here, we present the theoretical rationale behind the use of this technique, and highlight the distinction between a self-generated cue and a self-referent cue in autobiographical memory research. We contrast this mnemonic with a similar retrieval technique, Mental Reinstatement of Context, which is recognized as the most effective mnemonic component of the Cognitive Interview. Mental Reinstatement of Context is based upon the principle of encoding-retrieval specificity, whereby the overlap between encoded information and retrieval cue predicts the likelihood of accurate recall. However, it does not incorporate the potential additional benefit of self-generated retrieval cues. PMID:29163254
Space vehicle onboard command encoder
NASA Technical Reports Server (NTRS)
1975-01-01
A flexible onboard encoder system was designed for the space shuttle. The following areas were covered: (1) implementation of the encoder design in hardware to demonstrate the various encoding algorithms/code formats and modulation techniques in a single hardware package, maintaining reliability and link integrity comparable to the existing link systems; and (2) integration of the various techniques into a single design using current technology. The primary function of the command encoder is to accept input commands, generated either locally onboard the space shuttle or remotely from the ground, format and encode the commands in accordance with the payload input requirements, and appropriately modulate a subcarrier for transmission by the baseband RF modulator. The following information was provided: command encoder system design, brassboard hardware design, test set hardware and system packaging, and software.
Solving traveling salesman problems with DNA molecules encoding numerical values.
Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak
2004-12-01
We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.
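The idea of encoding a numerical value in a strand's melting temperature can be illustrated with the simple Wallace rule for short oligonucleotides; the value-to-GC-content mapping below is a hypothetical stand-in for the paper's actual sequence design.

```python
def melting_temp(seq):
    # Wallace rule for short oligos: Tm ~ 2(A+T) + 4(G+C) degrees C.
    at = sum(seq.count(b) for b in 'AT')
    gc = sum(seq.count(b) for b in 'GC')
    return 2 * at + 4 * gc

def encode_value(value, length=20):
    # Toy mapping: a value in [0, 1] sets the GC fraction of the strand,
    # so larger values denature at higher temperatures.
    gc = round(value * length)
    return 'G' * gc + 'A' * (length - gc)
```

A temperature-gradient separation (as in denaturation temperature gradient PCR or temperature gradient gel electrophoresis) can then preferentially retain strands whose Tm, and hence encoded weight, lies in a chosen range.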
Live-cell Imaging with Genetically Encoded Protein Kinase Activity Reporters.
Maryu, Gembu; Miura, Haruko; Uda, Youichi; Komatsubara, Akira T; Matsuda, Michiyuki; Aoki, Kazuhiro
2018-04-25
Protein kinases play pivotal roles in intracellular signal transduction, and dysregulation of kinases leads to pathological results such as malignant tumors. Kinase activity has hitherto been measured by biochemical methods such as in vitro phosphorylation assay and western blotting. However, these methods are less useful to explore spatial and temporal changes in kinase activity and its cell-to-cell variation. Recent advances in fluorescent proteins and live-cell imaging techniques enable us to visualize kinase activity in living cells with high spatial and temporal resolutions. Several genetically encoded kinase activity reporters, which are based on the modes of action of kinase activation and phosphorylation, are currently available. These reporters are classified into single-fluorophore kinase activity reporters and Förster (or fluorescence) resonance energy transfer (FRET)-based kinase activity reporters. Here, we introduce the principles of genetically encoded kinase activity reporters, and discuss the advantages and disadvantages of these reporters. Key words: kinase, FRET, phosphorylation, KTR.
DNA strand displacement system running logic programs.
Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr
2014-01-01
The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded by comprising different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
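At the logical level (abstracting away the DNA chemistry), one displacement-driven resolution step and its cascade can be modeled in a few lines. Clauses are sets of signed integers, with the sign playing the role of the complementary strand; the cascade control below is an illustrative simplification, not the molecular protocol.

```python
def resolve(c1, c2):
    # One resolution step: cancel a complementary literal pair and merge
    # the remaining literals (the resolvent).
    for lit in c1:
        if -lit in c2:
            return (c1 - {lit}) | (c2 - {-lit})
    return None

def refute(goal, program, max_steps=100):
    # Cascade resolution of a (negated) goal against Horn clauses; an
    # empty clause means the goal is proven.  A step bound guards
    # against non-terminating cascades.
    clause = goal
    for _ in range(max_steps):
        if not clause:
            return True
        for rule in program:
            res = resolve(clause, rule)
            if res is not None:
                clause = res
                break
        else:
            return False
    return not clause
```

For the program {p1, p1 → p2, p2 → p3}, refuting the negated query ¬p3 succeeds, mirroring a three-stage displacement cascade.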
Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.
Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella
2010-07-01
Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method favorably compares with state-of-art MDC techniques.
Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2013-09-01
It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.
Raster and vector processing for scanned linework
Greenlee, David D.
1987-01-01
An investigation of raster editing techniques, including thinning, filling, and node detecting, was performed by using specialized software. The techniques were based on encoding the state of the 3-by-3 neighborhood surrounding each pixel into a single byte. A prototypical method for converting the edited raster linework into vectors was also developed. Once vector representations of the lines were formed, they were formatted as a Digital Line Graph, and further refined by deletion of nonessential vertices and by smoothing with a curve-fitting technique.
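The byte-per-pixel neighborhood encoding described above can be sketched directly; the bit ordering and the node test below are illustrative conventions, not necessarily those of the original software.

```python
import numpy as np

# The 8 neighbours of a pixel, clockwise from top-left; neighbour i sets
# bit i of the code byte.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def neighborhood_codes(img):
    # img: binary 2-D array.  Returns, for every interior pixel, a single
    # byte encoding the state of its 3-by-3 neighborhood.
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(OFFSETS):
                if img[y + dy, x + dx]:
                    code |= 1 << bit
            codes[y, x] = code
    return codes

def is_node(code):
    # On thinned linework, an interior line pixel has exactly two set
    # neighbours; anything else is an end point or a junction (a node).
    return bin(code).count('1') not in (0, 2)
```

Because every 3-by-3 configuration maps to one of 256 bytes, thinning, filling, and node-detection rules can be implemented as simple 256-entry lookup tables over these codes.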
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention in recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor. PMID:25237900
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (k0 > m0) inputs, a state diagram of 2^k0 states was used for the transfer function bound. A reduced state diagram of (2^m0 + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
NASA Astrophysics Data System (ADS)
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of a mobile station (MS) has gained considerable attention due to its wide applications in military, environmental, health, and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. To match the actual situation, we should consider the condition in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method is proposed that combines the MUSIC algorithm and the cross correlation algorithm over an appropriate interval. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross correlation algorithm applied over the whole interval.
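The cross-correlation half of such a TOA estimator is straightforward to sketch: the delay estimate is the lag that maximizes the correlation between the known transmitted waveform and the received signal (the MUSIC stage and the interval-selection logic of the paper are omitted here).

```python
import numpy as np

def toa_xcorr(tx, rx):
    # Cross-correlation TOA: the estimated arrival delay (in samples) is
    # the lag at which the received signal best matches the transmitted one.
    corr = np.correlate(rx, tx, mode='full')
    return int(np.argmax(corr)) - (len(tx) - 1)
```

With a known pilot waveform and moderate noise, the peak lag recovers the true sample delay; multiplying by the sample period and the propagation speed converts it to a range estimate.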
NASA Astrophysics Data System (ADS)
Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching
2000-10-01
We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method generates low-resolution video with a convolutional neural network (CNN) in the encoder and reconstructs the original resolution with a CNN in the decoder. The proposed method shows improved subjective performance on all of the high-resolution videos that are dominantly consumed nowadays. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, diverse bitrates were tested to assess general performance. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. VMAF values were especially improved at low bitrates. In subjective testing, the method also showed better visual quality at similar bit rates.
Bandwidth compression of color video signals. Ph.D. Thesis Final Report, 1 Oct. 1979 - 30 Sep. 1980
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1980-01-01
The different encoder/decoder strategies to digitally encode video using an adaptive delta modulation are described. The techniques employed are: (1) separately encoding the R, G, and B components; (2) separately encoding the I, Y, and Q components; and (3) encoding the picture in a line sequential manner.
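Each component stream in such a scheme is digitized by an adaptive delta modulator. The sketch below uses a simple doubling/halving step-size rule, one common adaptation strategy and an assumption here (the report's exact rule is not specified): the encoder emits one bit per sample, and the decoder reproduces the encoder's internal estimate.

```python
def adm_encode(samples, step0=1.0):
    # One bit per sample: 1 if the signal is at or above the running
    # estimate, else 0.  The step doubles on repeated bits (to fight
    # slope overload) and halves on alternations (granular noise).
    bits, est, step, prev = [], 0.0, step0, None
    for s in samples:
        bit = 1 if s >= est else 0
        if prev is not None:
            step = step * 2 if bit == prev else step / 2
        est += step if bit else -step
        bits.append(bit)
        prev = bit
    return bits

def adm_decode(bits, step0=1.0):
    # Mirror the encoder's state machine to reconstruct the signal.
    est, step, prev, out = 0.0, step0, None, []
    for bit in bits:
        if prev is not None:
            step = step * 2 if bit == prev else step / 2
        est += step if bit else -step
        out.append(est)
        prev = bit
    return out
```

Encoding R, G, and B (or the I, Y, and Q components) separately then just means running three such one-bit streams in parallel.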
Pinal, Diego; Zurrón, Montserrat; Díaz, Fernando
2014-01-01
Working memory (WM) involves information encoding, maintenance, and retrieval; these are supported by brain activity in a network of frontal, parietal, and temporal regions. Manipulation of WM load and of the duration of the maintenance period can modulate this activity. Although such modulations have been widely studied using the event-related potentials (ERP) technique, a precise description of the time course of brain activity during encoding and retrieval is still required. Here, we used this technique and principal component analysis to assess the time course of brain activity during encoding and retrieval in a delayed match-to-sample task. We also investigated the effects of memory load and duration of the maintenance period on ERP activity. Brain activity was similar during information encoding and retrieval and comprised six temporal factors, which closely matched the latency and scalp distribution of several ERP components: P1, N1, P2, N2, P300, and a slow wave. Changes in memory load modulated task performance and yielded variations in frontal lobe activation. Moreover, the P300 amplitude was smaller in the high- than in the low-load condition during encoding and retrieval. Conversely, the slow wave amplitude was higher in the high- than in the low-load condition during encoding, and the same was true for the N2 amplitude during retrieval. Thus, during encoding, memory load appears to modulate the processing resources for context updating and post-categorization processes, and during retrieval it modulates resources for stimulus classification and context updating. In addition, despite the lack of differences in task performance related to the duration of the maintenance period, larger N2 amplitude and stronger activation of the left temporal lobe after long than after short maintenance periods were found during information retrieval. Thus, the results regarding the duration of the maintenance period were complex, and future work is required to test the predictions of time-based decay theory.
Datta, Asit K; Munshi, Soumika
2002-03-10
Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented, by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of the operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks has simplified the architecture as well as reduced the cost and complexity.
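The base -2 conversions that such processors assume can be sketched in a few lines of Python. This is a generic software routine for illustration only, not the optical spatial-encoding scheme the abstract describes:

```python
def to_negabinary(n: int) -> str:
    """Convert a signed decimal integer to its base -2 (negabinary) digit string."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:              # force each digit to be 0 or 1
            n += 1
            r += 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s: str) -> int:
    """Evaluate a negabinary digit string back to a decimal integer."""
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

# One unsigned digit set covers both signs: no separate sign bit is needed.
assert to_negabinary(6) == "11010" and from_negabinary("11010") == 6
assert from_negabinary(to_negabinary(-5)) == -5
```

The absence of a sign bit is what makes the representation attractive for parallel optical arithmetic: positive and negative operands flow through one datapath.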
Permutation coding technique for image recognition systems.
Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel
2006-11-01
A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which takes into account not only the detected features but also the position of each feature in the image, making the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.
Ultrasonic Array for Obstacle Detection Based on CDMA with Kasami Codes
Diego, Cristina; Hernández, Álvaro; Jiménez, Ana; Álvarez, Fernando J.; Sanz, Rebeca; Aparicio, Joaquín
2011-01-01
This paper presents the design of an ultrasonic array for obstacle detection based on Phased Array (PA) techniques, which steers the acoustic beam through the environment by electronic rather than mechanical means. The transmission of every element in the array has been encoded according to Code Division Multiple Access (CDMA), which allows multiple beams to be transmitted simultaneously. All these features together enable a parallel scanning system which not only improves the image rate but also achieves longer inspection distances in comparison with conventional PA techniques.
NASA Astrophysics Data System (ADS)
Liu, Yan; Lai, Puxiang; Ma, Cheng; Xu, Xiao; Suzuki, Yuta; Grabar, Alexander A.; Wang, Lihong V.
2014-03-01
Time-reversed ultrasonically encoded (TRUE) optical focusing is an emerging technique that focuses light deep into scattering media by phase-conjugating ultrasonically encoded diffuse light. In previous work, the speed of TRUE focusing was limited to no faster than 1 Hz by the response time of the photorefractive phase conjugate mirror, or the data acquisition and streaming speed of the digital camera; photorefractive-crystal-based TRUE focusing was also limited to the visible spectral range. These time-consuming schemes prevent this technique from being applied in vivo, since living biological tissue has a speckle decorrelation time on the order of a millisecond. In this work, using a Te-doped Sn2P2S6 photorefractive crystal at a near-infrared wavelength of 793 nm, we achieved TRUE focusing inside dynamic scattering media having a speckle decorrelation time as short as 7.7 ms. As the achieved speed approaches the tissue decorrelation rate, this work is an important step toward in vivo applications of TRUE focusing in deep tissue imaging, photodynamic therapy, and optical manipulation.
NASA Astrophysics Data System (ADS)
Chow, Yu Ting; Chen, Shuxun; Wang, Ran; Liu, Chichi; Kong, Chi-Wing; Li, Ronald A.; Cheng, Shuk Han; Sun, Dong
2016-04-01
Cell transfection is a technique wherein foreign genetic molecules are delivered into cells. To elucidate distinct responses during cell genetic modification, methods to achieve transfection at the single-cell level are of great value. Herein, we developed an automated micropipette-based quantitative microinjection technology that can deliver precise amounts of materials into cells. The developed microinjection system achieved precise single-cell microinjection by pre-patterning cells in an array and controlling the amount of substance delivered based on injection pressure and time. The precision of the proposed injection technique was examined by comparing the fluorescence intensities of fluorescent dye droplets with a standard concentration and water droplets with a known injection amount of the dye in oil. Injection of synthetic modified mRNA (modRNA) encoding green fluorescent proteins or a cocktail of plasmids encoding green and red fluorescent proteins into human foreskin fibroblast cells demonstrated that the resulting green fluorescence intensity or green/red fluorescence intensity ratio was well correlated with the amount of genetic material injected into the cells. Single-cell transfection via the developed microinjection technique will be of particular use in cases where cell transfection is challenging and genetic modification of selected cells is desired.
Space-time encoding for high frame rate ultrasound imaging.
Misaridis, Thanassis X; Jensen, Jørgen A
2002-05-01
Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts, which degrade the image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up- and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, due to the orthogonality of the temporally encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings. This reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as in phased-array imaging.
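The Hadamard spatial encoding referenced above (Chiao et al.) can be illustrated numerically. The per-element responses below are synthetic random stand-ins, not ultrasound data; the point is only that firing +/-1-weighted sums and decoding with the transposed matrix separates the individual group responses:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Synthetic per-group responses standing in for what uncoded STA measures one group at a time.
rng = np.random.default_rng(0)
responses = rng.normal(size=(4, 128))   # 4 transmit groups x 128 time samples

H = hadamard(4)
encoded = H @ responses                 # each firing sends a +/-1-weighted sum of all groups
decoded = (H.T @ encoded) / 4           # H.T @ H = 4*I, so decoding separates the groups

assert np.allclose(decoded, responses)  # individual responses recovered exactly (noise-free)
```

Because every firing excites all groups at full amplitude, the decoded responses carry an SNR gain over one-group-at-a-time transmission, which is the motivation for combining this with the temporal chirp codes above.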
High-order multiband encoding in the heart.
Cunningham, Charles H; Wright, Graham A; Wood, Michael L
2002-10-01
Spatial encoding with multiband selective excitation (e.g., Hadamard encoding) has been restricted to a small number of slices because the RF pulse becomes unacceptably long when more than about eight slices are encoded. In this work, techniques to shorten multiband RF pulses, and thus allow larger numbers of slices, are investigated. A method for applying the techniques while retaining the capability of adaptive slice thickness is outlined. A tradeoff between slice thickness and pulse duration is shown. Simulations and experiments with the shortened pulses confirmed that motion-induced excitation profile blurring and phase accrual were reduced. The connection between gradient hardware limitations, slice thickness, and flow sensitivity is shown. Excitation profiles for encoding 32 contiguous slices of 1-mm thickness were measured experimentally, and the artifact resulting from errors in timing of RF pulse relative to gradient was investigated. A multiband technique for imaging 32 contiguous 2-mm slices, with adaptive slice thickness, was developed and demonstrated for coronary artery imaging in healthy subjects. With the ability to image high numbers of contiguous slices, using relatively short (1-2 ms) RF pulses, multiband encoding has been advanced further toward practical application. Copyright 2002 Wiley-Liss, Inc.
Context dependent prediction and category encoding for DPCM image compression
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.
1989-01-01
Efficient compression of image data requires the understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal variance space; context dependent one- and two-dimensional predictors; rationale for nonlinear DPCM encoding based upon an image quality model; context dependent variable length encoding of 2x2 data blocks; and feedback control for constant output rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context dependent predictors, and the hope for sub-bit-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
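The core DPCM loop (predict, quantize the residual, track the decoder's reconstruction) can be sketched as follows. The previous-pixel predictor and uniform quantizer here are deliberate simplifications of the context-dependent predictors described above:

```python
def dpcm_encode(row, q_step=4):
    """Previous-pixel DPCM: quantize each prediction residual."""
    pred, codes = 0, []
    for x in row:
        q = round((x - pred) / q_step)     # uniform residual quantizer
        codes.append(q)
        pred += q * q_step                 # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, q_step=4):
    pred, out = 0, []
    for q in codes:
        pred += q * q_step
        out.append(pred)
    return out

row = [10, 12, 11, 50, 52]
rec = dpcm_decode(dpcm_encode(row))
# Because the encoder predicts from its own reconstruction, error never exceeds q_step/2.
assert all(abs(a - b) <= 2 for a, b in zip(row, rec))
```

Feeding the encoder's own reconstruction back into the predictor (rather than the original pixels) is what keeps the quantization error from accumulating along the scan line.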
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
Visual tracking using neuromorphic asynchronous event-based cameras.
Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad
2015-04-01
This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
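A minimal Huffman coder over prediction residuals illustrates why replacing elevations with corrections helps: the corrections' skewed distribution earns short codewords for the common values. The residual values below are invented for illustration; the coder itself is the standard textbook algorithm, not the paper's elevation predictor:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return {symbol: bitstring} built from the empirical frequencies of `symbols`."""
    freq = Counter(symbols)
    if len(freq) == 1:                            # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [[w, [sym, ""]] for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)                  # the two least probable subtrees...
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]               # ...are prefixed with 0 and 1
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

# Hypothetical prediction residuals: small corrections dominate, as the text describes.
residuals = [0] * 8 + [1] * 4 + [-1] * 4 + [2, -2, 5]
codes = huffman_code(residuals)
assert len(codes[0]) < len(codes[5])              # frequent symbols get shorter codewords
```

The more sharply peaked the residual distribution that the predictor produces, the shorter the average codeword, which is exactly the mechanism behind the 7 percent improvement quoted above.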
Techniques for video compression
NASA Technical Reports Server (NTRS)
Wu, Chwan-Hwa
1995-01-01
In this report, we present our study on multiprocessor implementation of an MPEG-2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of the MPEG-2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation. Then we present our experimental study on the multiprocessor implementation of the two crucial modules. Data partitioning is used for job assignment. Experimental results show that a high speedup ratio and good scalability can be achieved by using this kind of job assignment strategy.
Genetically encoded sensors and fluorescence microscopy for anticancer research
NASA Astrophysics Data System (ADS)
Zagaynova, Elena V.; Shirmanova, Marina V.; Sergeeva, Tatiana F.; Klementieva, Natalia V.; Mishin, Alexander S.; Gavrina, Alena I.; Zlobovskay, Olga A.; Furman, Olga E.; Dudenkova, Varvara V.; Perelman, Gregory S.; Lukina, Maria M.; Lukyanov, Konstantin A.
2017-02-01
The early response of cancer cells to chemical compounds and chemotherapeutic drugs was studied using novel fluorescence tools and microscopy techniques. We applied confocal microscopy, two-photon fluorescence lifetime imaging microscopy and super-resolution localization-based microscopy to assess structural and functional changes in cancer cells in vitro. The dynamics of energy metabolism, intracellular pH, caspase-3 activation during staurosporine-induced apoptosis as well as actin cytoskeleton rearrangements under chemotherapy were evaluated. We have shown that new genetically encoded sensors and advanced fluorescence microscopy methods provide an efficient way for multiparameter analysis of cell activities.
Native and Nonnative Processing of Japanese Pitch Accent
ERIC Educational Resources Information Center
Wu, Xianghua; Tu, Jung-Yueh; Wang, Yue
2012-01-01
The theoretical framework of this study is based on the prevalent debate of whether prosodic processing is influenced by higher level linguistic-specific circuits or reflects lower level encoding of physical properties. Using the dichotic listening technique, the study investigates the hemispheric processing of Japanese pitch accent by native…
High reliability outdoor sonar prototype based on efficient signal coding.
Alvarez, Fernando J; Ureña, Jesús; Mazo, Manuel; Hernández, Alvaro; García, Juan J; de Marziani, Carlos
2006-10-01
Many mobile robots and autonomous vehicles designed for outdoor operation have incorporated ultrasonic sensors in their navigation systems, whose function is mainly to avoid possible collisions with very close obstacles. The use of these systems in more precise tasks requires signal encoding and the incorporation of pulse compression techniques that have already been used with success in the design of high-performance indoor sonars. However, the transmission of ultrasonic encoded signals outdoors entails a new challenge because of the effects of atmospheric turbulence. This phenomenon causes random fluctuations in the phase and amplitude of traveling acoustic waves, a fact that can make the encoded signal completely unrecognizable by its matched receiver. Atmospheric turbulence is investigated in this work, with the aim of determining the conditions under which it is possible to assure the reliable outdoor operation of an ultrasonic pulse compression system. As a result of this analysis, a novel sonar prototype based on complementary sequences coding is developed and experimentally tested. This encoding scheme provides the system with very useful additional features, namely, high robustness to noise, multi-mode operation capability (simultaneous emissions with minimum cross talk interference), and the possibility of applying an efficient detection algorithm that notably decreases the hardware resource requirements.
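The complementary-sequence property the sonar exploits (summed matched-filter outputs forming an ideal delta with zero sidelobes) can be checked directly with the standard recursive Golay construction. This sketch verifies the mathematical property only; it does not model the acoustic channel or turbulence:

```python
import numpy as np

def golay_pair(n_doublings):
    """Golay complementary pair of length 2**n_doublings via the standard recursion."""
    a = np.array([1])
    b = np.array([1])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)                      # a length-32 complementary pair
n = len(a)
acorr_sum = np.correlate(a, a, "full") + np.correlate(b, b, "full")

# The summed autocorrelations form an ideal delta: peak 2N, all sidelobes exactly zero.
assert acorr_sum[n - 1] == 2 * n
assert not acorr_sum[:n - 1].any() and not acorr_sum[n:].any()
```

Zero sidelobes after pulse compression is what makes these codes robust to noise and allows the multi-mode (simultaneous-emission) operation mentioned in the abstract.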
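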
Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah
2016-01-01
This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
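The decomposition into a gradient step on the data-fidelity term plus a proximal update for the regularizer can be illustrated on a toy problem. This is a plain proximal-gradient (ISTA-style) sketch on a small linear least-squares model with an l1 penalty, standing in for the wave-equation USCT model; it is not the regularized dual averaging method itself, but it shows the same key point: the nonsmooth penalty is handled by its proximal operator and is never differentiated.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1; the nonsmooth penalty is never differentiated."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy sparse least-squares problem standing in for the USCT data-fidelity term.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]
y = A @ x_true

x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the Lipschitz constant
lam = 0.1                                  # l1 regularization weight
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))     # gradient descent step on the data term
    x = soft_threshold(x, step * lam)      # proximal update for the regularizer

assert np.argmax(np.abs(x)) == 3 and abs(x[7]) > 1.0
```

Swapping the exact gradient for a stochastic, source-encoded one and averaging past gradients is, loosely, what turns this sketch into the dual averaging scheme the paper studies.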
A deep learning method for lincRNA detection using auto-encoder algorithm.
Yu, Ning; Yu, Zeng; Pan, Yi
2017-12-06
RNA sequencing technique (RNA-seq) enables scientists to develop novel data-driven methods for discovering more unidentified lincRNAs. Meantime, knowledge-based technologies are experiencing a potential revolution ignited by the new deep learning methods. By scanning the newly found data set from RNA-seq, scientists have found that: (1) the expression of lincRNAs appears to be regulated, that is, the relevance exists along the DNA sequences; (2) lincRNAs contain some conversed patterns/motifs tethered together by non-conserved regions. The two evidences give the reasoning for adopting knowledge-based deep learning methods in lincRNA detection. Similar to coding region transcription, non-coding regions are split at transcriptional sites. However, regulatory RNAs rather than message RNAs are generated. That is, the transcribed RNAs participate the biological process as regulatory units instead of generating proteins. Identifying these transcriptional regions from non-coding regions is the first step towards lincRNA recognition. The auto-encoder method achieves 100% and 92.4% prediction accuracy on transcription sites over the putative data sets. The experimental results also show the excellent performance of predictive deep neural network on the lincRNA data sets compared with support vector machine and traditional neural network. In addition, it is validated through the newly discovered lincRNA data set and one unreported transcription site is found by feeding the whole annotated sequences through the deep learning machine, which indicates that deep learning method has the extensive ability for lincRNA prediction. The transcriptional sequences of lincRNAs are collected from the annotated human DNA genome data. Subsequently, a two-layer deep neural network is developed for the lincRNA detection, which adopts the auto-encoder algorithm and utilizes different encoding schemes to obtain the best performance over intergenic DNA sequence data. 
Driven by those newly annotated lincRNA data, deep learning methods based on auto-encoder algorithm can exert their capability in knowledge learning in order to capture the useful features and the information correlation along DNA genome sequences for lincRNA detection. As our knowledge, this is the first application to adopt the deep learning techniques for identifying lincRNA transcription sequences.
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, more copies of the same signal are transmitted through different antennas (space) and different symbol periods (time), to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides some figures for the amount of required FPGA hardware resources, the speed that the algorithms can operate and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16:55-108.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
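Alamouti's scheme, the best trade-off in the comparison above, is compact enough to verify numerically. This sketch covers only the encoding matrix and the linear combining for a noiseless flat-fading channel with one receive antenna; the channel coefficients are random stand-ins, and none of this reflects the FPGA implementation details:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Two symbols over two antennas and two symbol periods (rows = periods)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

rng = np.random.default_rng(4)
h = rng.normal(size=2) + 1j * rng.normal(size=2)   # flat fading to one receive antenna
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 + 1j) / np.sqrt(2)

X = alamouti_encode(s1, s2)
r = X @ h                                          # noiseless receive over the two periods

# Linear combining recovers each symbol scaled by the diversity gain ||h||^2.
s1_hat = np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])
s2_hat = np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
assert np.allclose(s1_hat / g, s1) and np.allclose(s2_hat / g, s2)
```

The fact that decoding reduces to these two conjugate-multiply-accumulate lines (no matrix inversion) is a large part of why the Alamouti encoder synthesizes so much smaller and faster than the higher-order STBCs in the comparison.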
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod
2010-04-01
For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
An additional study and implementation of tone calibrated technique of modulation
NASA Technical Reports Server (NTRS)
Rafferty, W.; Bechtel, L. K.; Lay, N. E.
1985-01-01
The Tone Calibrated Technique (TCT) was shown to be theoretically free from an error floor, and is only limited, in practice, by implementation constraints. The concept of the TCT transmission scheme along with a baseband implementation of a suitable demodulator is introduced. Two techniques for the generation of the TCT signal are considered: a Manchester source encoding scheme (MTCT) and a subcarrier based technique (STCT). The results are summarized for the TCT link computer simulation. The hardware implementation of the MTCT system is addressed and the digital signal processing design considerations involved in satisfying the modulator/demodulator requirements are outlined. The program findings are discussed and future directions are suggested based on conclusions made regarding the suitability of the TCT system for the transmission channel presently under consideration.
Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.
Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří
2017-11-10
We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single-pixel camera, a compressive-sensing technique that enables imaging with a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was performed via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector, we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, pattern fineness, and number of data points. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
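The single-pixel measurement model is simple enough to sketch: each pattern projected onto the scene yields one detector reading, the inner product of the pattern with the image. The sketch below uses random matrices as stand-ins for laser speckle and, for simplicity, takes as many patterns as pixels so plain least squares suffices; a true compressive-sensing reconstruction would use fewer patterns plus a sparsity prior:

```python
import numpy as np

rng = np.random.default_rng(2)
img = np.zeros((8, 8))
img[2:5, 3:6] = 1.0                        # toy luminous object in the scene

n_pix = img.size
patterns = rng.random((n_pix, n_pix))      # random patterns standing in for laser speckle
y = patterns @ img.ravel()                 # one single-pixel (spectrometer) reading per pattern

# With as many patterns as pixels, plain least squares recovers the scene exactly.
recon = np.linalg.lstsq(patterns, y, rcond=None)[0].reshape(8, 8)
assert np.allclose(recon, img, atol=1e-6)
```

Replacing the scalar detector with a spectrometer turns each reading y into a full spectrum, which is how the setup obtains a hyperspectral image from a spatially blind detector.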
Restoring the encoding properties of a stochastic neuron model by an exogenous noise
Paffi, Alessandra; Camera, Francesca; Apollonio, Francesca; d'Inzeo, Guglielmo; Liberti, Micaela
2015-01-01
Here we evaluate the possibility of improving the encoding properties of an impaired neuronal system by superimposing an exogenous noise on an external electric stimulation signal. The approach is based on the use of mathematical neuron models consisting of a stochastic HH-like circuit, where the impairment of the endogenous presynaptic inputs is described as a subthreshold injected current and the exogenous stimulation signal is a sinusoidal voltage perturbation across the membrane. Our results indicate that a correlated Gaussian noise added to the sinusoidal signal can significantly improve the encoding properties of the impaired system, through the Stochastic Resonance (SR) phenomenon. These results suggest that an exogenous noise, suitably tailored, could improve the efficacy of those stimulation techniques used in neuronal systems where the presynaptic sensory neurons are impaired and have to be artificially bypassed.
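The stochastic resonance mechanism can be demonstrated with a far simpler model than the HH-like circuit used in the paper: a bare threshold detector driven by a subthreshold sinusoid. With zero noise the signal is never encoded at all; adding a moderate noise lets threshold crossings cluster around the signal peaks, so the output carries power at the drive frequency. All parameters here are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f0, n = 1000, 5, 1000                  # 1 s of data; fs/n = 1 Hz, so bin 5 sits on f0
t = np.arange(n) / fs
drive = 0.8 * np.sin(2 * np.pi * f0 * t)   # subthreshold sinusoid (threshold = 1)

def power_at_f0(noise_std):
    """Power of the threshold detector's spike train at the drive frequency."""
    spikes = (drive + rng.normal(0.0, noise_std, n) > 1.0).astype(float)
    spec = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2 / n
    return spec[f0]                        # bin index equals f0 since fs/n = 1 Hz

assert power_at_f0(0.0) == 0.0             # no noise: the subthreshold signal is lost
assert power_at_f0(0.3) > 0.0              # moderate noise: crossings track the signal
```

Sweeping the noise level would trace the characteristic SR curve, with output signal power peaking at an intermediate noise intensity before noise dominates again.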
NASA Astrophysics Data System (ADS)
Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.
2010-04-01
A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates of around 10^-18. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
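The orthogonality that lets matched codes be distinguished from mismatched ones can be checked numerically. The sketch below builds length-8 Walsh-Hadamard codes by the standard Sylvester construction and compares two of them by correlation, a simplified electronic stand-in for the interferometric comparison described above.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Eight-bit Walsh-Hadamard codes, matching the code length used in the experiment.
codes = hadamard(8)

# Interferometric comparison reduces to a correlation between the two codes:
# matched codes give full-scale output, mismatched (orthogonal) codes give zero.
matched = int(codes[3] @ codes[3])
mismatched = int(codes[3] @ codes[5])
print(matched, mismatched)  # 8 and 0
```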
Weight and power savings shaft encoder interfacing techniques for aerospace applications
NASA Technical Reports Server (NTRS)
Breslow, Donald H.
1986-01-01
Many aerospace applications for shaft angle digitizers such as optical shaft encoders require special features that are not usually required on commercial products. Among the most important user considerations are the lowest possible weight and power consumption. A variety of mechanical and electrical interface techniques that have large potential weight and power savings are described. The principles presented apply to a wide variety of encoders, ranging from 16- to 22-bit resolution and with diameters from 152 to 380 mm (6 to 15 in.).
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1992-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
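The edge degradation that motivates the paper can be seen in the simplest differential coder, one-bit delta modulation: with a fixed step size, a sharp intensity edge can only be climbed one step per sample (slope overload). This is a generic illustration of the problem, not the authors' improved system.

```python
def delta_mod(row, step):
    """One-bit differential (delta) modulation: transmit only the sign of the
    prediction residual, scaled by a fixed step size."""
    recon = []
    prev = 0.0
    for sample in row:
        prev += step if sample >= prev else -step
        recon.append(prev)
    return recon

row = [10, 10, 10, 200, 200, 200, 200, 200]
# The 10 -> 200 edge is climbed only 16 levels per sample, smearing the edge.
print(delta_mod(row, step=16))
```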
Hydrogel microparticles for biosensing
Le Goff, Gaelle C.; Srinivas, Rathi L.; Hill, W. Adam; Doyle, Patrick S.
2015-01-01
Due to their hydrophilic, biocompatible, and highly tunable nature, hydrogel materials have attracted strong interest in recent years for numerous biotechnological applications. In particular, their solution-like environment and non-fouling nature in complex biological samples render hydrogels ideal substrates for biosensing applications. Hydrogel coatings, and later, gel dot surface microarrays, were successfully used in sensitive nucleic acid assays and immunoassays. More recently, new microfabrication techniques for synthesizing encoded particles from hydrogel materials have enabled the development of hydrogel-based suspension arrays. Lithography processes and droplet-based microfluidic techniques enable generation of libraries of particles with unique spectral or graphical codes, for multiplexed sensing in biological samples. In this review, we discuss the key questions arising when designing hydrogel particles dedicated to biosensing. How can the hydrogel material be engineered in order to tune its properties and immobilize bioprobes inside? What are the strategies to fabricate and encode gel particles, and how can particles be processed and decoded after the assay? Finally, we review the bioassays reported so far in the literature that have used hydrogel particle arrays and give an outlook on further developments of the field. PMID:26594056
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in both space and time, in moving images, taking object motion into account. The previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very-low-bit-rate compression.
NASA Astrophysics Data System (ADS)
Wachowicz, K.; Murray, B.; Fallone, B. G.
2018-06-01
The recent interest in the integration of external beam radiotherapy with a magnetic resonance (MR) imaging unit offers the potential for real-time adaptive tumour tracking during radiation treatment. The tracking of large tumours which follow a rapid trajectory may best be served by the generation of a projection image from the perspective of the beam source, or 'beam's eye view' (BEV). This type of image projection represents the path of the radiation beam, thus enabling rapid compensation for target translations, rotations and deformations, as well as time-dependent critical structure avoidance. MR units have been traditionally incapable of this type of imaging except through lengthy 3D acquisitions and ray tracing procedures. This work investigates some changes to the traditional MR scanner architecture that would permit the direct acquisition of a BEV image suitable for integration with external beam radiotherapy. Based on the theory presented in this work, a phantom was imaged with nonlinear encoding-gradient field patterns to demonstrate the technique. The phantom was constructed with agarose gel tubes spaced 2 cm apart at their base and oriented to converge towards an imaginary beam source 100 cm away. A corresponding virtual phantom was also created and subjected to the same encoding technique as in the physical demonstration, allowing the method to be tested without hardware limitations. The experimentally acquired and simulated images indicate the feasibility of the technique, showing a substantial amount of blur reduction in a diverging phantom compared to the conventional imaging geometry, particularly with the nonlinear gradients ideally implemented. The theory is developed to demonstrate that the method can be adapted in a number of different configurations to accommodate all proposed integration schemes for MR units and radiotherapy sources. Depending on the configuration, the implementation of this technique will require between two and four additional nonlinear encoding coils.
Recent developments of genetically encoded optical sensors for cell biology.
Bolbat, Andrey; Schultz, Carsten
2017-01-01
Optical sensors are powerful tools for live-cell research, as they make it possible to follow the location, concentration changes or activities of key cellular players such as lipids, ions and enzymes. Most of the current sensor probes are based on fluorescence, which provides great spatial and temporal precision provided that high-end microscopy is used and that the timescale of the event of interest fits the response time of the sensor. Many of the sensors developed in the past 20 years are genetically encoded. There is a diversity of designs leading to simple or sometimes complicated applications for use in live cells. Genetically encoded sensors began to emerge after the discovery of fluorescent proteins, the engineering of their improved optical properties, and the manipulation of their structure through application of circular permutation. In this review, we describe a variety of genetically encoded biosensor concepts, including intensiometric and ratiometric sensors based on single fluorescent proteins, Förster resonance energy transfer-based sensors, sensors utilising bioluminescence, sensors using self-labelling SNAP- and CLIP-tags, and finally tetracysteine-based sensors. We focus on the newer developments and discuss the current approaches and techniques for design and application. This will demonstrate the power of using optical sensors in cell biology and will help open the field to more systematic applications in the future. © 2016 Société Française des Microscopies and Société de Biologie Cellulaire de France. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources behind the imperfect linear-optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine its realizability limits over a realistic range of errors. Finally, we discuss the physical constraints on implementing the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
A Formalisation of Adaptable Pervasive Flows
NASA Astrophysics Data System (ADS)
Bucchiarone, Antonio; Lafuente, Alberto Lluch; Marconi, Annapaola; Pistore, Marco
Adaptable Pervasive Flows is a novel workflow-based paradigm for the design and execution of pervasive applications, where dynamic workflows situated in the real world are able to modify their execution in order to adapt to changes in their environment. In this paper, we study a formalisation of such flows by means of a formal flow language. More precisely, we define APFoL (Adaptable Pervasive Flow Language) and formalise its textual notation by encoding it in Blite, a formalisation of WS-BPEL. The encoding in Blite equips the language with a formal semantics and enables the use of automated verification techniques. We illustrate the approach with an example of a Warehouse Case Study.
Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery
NASA Technical Reports Server (NTRS)
Xie, Hua; Klimesh, Matthew A.
2009-01-01
This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same prediction and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values when predicting later sample values. The technique described here applies this method to the original technique to allow near-lossless compression. The extension adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
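The predict/quantize/encode loop described above can be sketched directly. Note that the encoder predicts from the *reconstructed* value, exactly as the abstract specifies, which is what bounds the per-sample error by max_err. The specific predictor (previous sample) and the sample values are illustrative choices, not those of the NASA algorithm.

```python
import numpy as np

def near_lossless_encode(samples, max_err):
    """Predictive coding with quantized residuals. The encoder predicts each
    sample from the previously reconstructed sample (as the decoder will),
    so the reconstruction error never exceeds max_err."""
    step = 2 * max_err + 1
    residuals, prev = [], 0
    for s in samples:
        r = int(np.round((s - prev) / step))  # quantized prediction residual
        residuals.append(r)
        prev = prev + r * step                # the value the decoder will see
    return residuals

def near_lossless_decode(residuals, max_err):
    step = 2 * max_err + 1
    out, prev = [], 0
    for r in residuals:
        prev = prev + r * step
        out.append(prev)
    return out

data = [100, 104, 103, 90, 95, 200]
enc = near_lossless_encode(data, max_err=2)
dec = near_lossless_decode(enc, max_err=2)
print(enc, dec)
```

Setting max_err to 0 makes the step size 1 and the scheme collapses to the lossless case, mirroring how the extension subsumes the original technique.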
Security enhanced BioEncoding for protecting iris codes
NASA Astrophysics Data System (ADS)
Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya
2011-06-01
Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Besides, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. First, the resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Second, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code can be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attack. The effectiveness of the suggested modification is validated, and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
Through reorganizing the execution order and optimizing the data structure, we propose an efficient parallel framework for an H.264/AVC encoder based on a massively parallel architecture, implemented with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we propose several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation achieves a 20x speedup over the serial program and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR ranges from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is closely related to memory bandwidth, which offers insight for new architecture designs. PMID:24757432
Zheng, Linli; Ge, Yumei; Hu, Weilin; Yan, Jie
2013-03-01
To determine expression changes of major outer membrane protein (OMP) antigens of Leptospira interrogans serogroup Icterohaemorrhagiae serovar Lai strain Lai during infection of human macrophages, and the underlying mechanism. The OmpR-encoding genes and the OmpR-related histidine kinase (HK)-encoding gene of L. interrogans strain Lai, and their functional domains, were predicted using bioinformatics techniques. Changes in mRNA levels of the leptospiral major OMP-encoding genes before and after infection of human THP-1 macrophages were detected by real-time fluorescence quantitative RT-PCR. Effects of the OmpR-encoding genes and the HK-encoding gene on the expression of leptospiral OMPs during infection were determined by an HK-peptide antiserum block assay and closantel inhibition assays. The bioinformatics analysis indicated that LB015 and LB333 are OmpR-encoding genes of the spirochete, while LB014 may act as an OmpR-related HK-encoding gene. After the spirochete infected THP-1 cells, mRNA levels of the leptospiral lipL21, lipL32 and lipL41 genes were rapidly and persistently down-regulated (P<0.01), whereas mRNA levels of the leptospiral groEL, mce, loa22 and ligB genes were rapidly but transiently up-regulated (P<0.01). Treatment with closantel and HK-peptide antiserum partly reversed the infection-based down-regulation of lipL21 and lipL48 mRNA levels (P<0.01). Moreover, closantel decreased the infection-based up-regulated mRNA levels of the groEL, mce, loa22 and ligB genes (P<0.01). Expression levels of L. interrogans strain Lai major OMP antigens change markedly during infection of human macrophages. A group of OmpR- and HK-encoding genes may play a major role in down-regulating the expression of some OMP antigens during infection.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires only 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique thus bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
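The core of any adaptive/predictive speech coder is a short-term linear predictor fitted to the recent signal; the residual it leaves behind is what actually gets encoded. A minimal sketch, using a synthetic second-order resonance as a stand-in for a speech frame (an assumption, not data from the article):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "speech" frame: a second-order resonance excited by noise.
n = 2000
x = np.zeros(n)
e = rng.normal(0, 1, n)
for i in range(2, n):
    x[i] = 1.6 * x[i - 1] - 0.81 * x[i - 2] + e[i]

# Fit a 2-tap linear predictor by least squares: x[i] ~ c0*x[i-1] + c1*x[i-2].
A = np.column_stack([x[1:-1], x[:-2]])
coefs, *_ = np.linalg.lstsq(A, x[2:], rcond=None)
residual = x[2:] - A @ coefs

# Prediction gain: how much signal energy the predictor removes before encoding.
gain = x.var() / residual.var()
print(coefs, gain)
```

The residual has far less energy than the signal, which is why encoding it (rather than the raw samples) allows low bit rates at good quality.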
A new encoding scheme for visible light communications with applications to mobile connections
NASA Astrophysics Data System (ADS)
Benton, David M.; St. John Brittan, Paul
2017-10-01
A novel and unconventional encoding scheme called concurrent coding has recently been demonstrated and shown to offer interesting features and benefits in comparison with conventional techniques, such as robustness against burst errors and improved efficiency of transmitted power. Free-space optical communications can suffer particularly from alignment issues, which require stable, fixed links to be established, and from beam wander, which can interrupt communications. Concurrent coding has the potential to ease these difficulties and enable mobile, flexible optical communications through the use of a source encoding technique. This concept has been applied for the first time to optical communications, where standard light-emitting diodes (LEDs) were used to transmit information encoded with concurrent coding. The technique successfully transmits and decodes data despite unpredictable interruptions to the transmission causing significant drop-outs in the detected signal. The technique also shows how it is possible to send a single block of data in isolation, with no pre-synchronisation required between transmitter and receiver and no specific synchronisation sequence appended to the transmission. Such systems are robust against interference, intentional or otherwise, as well as intermittent beam blockage.
NASA Astrophysics Data System (ADS)
Sait, Abdulrahman S.
This dissertation presents a reliable technique for monitoring the condition of rotating machinery by applying instantaneous angular speed (IAS) analysis. A new analysis of the effects of changes in the orientation of the line of action and the pressure angle of the resultant force acting on the tooth profile of a spur gear under different levels of tooth damage is utilized. The analysis and experimental work discussed in this dissertation provide a clear understanding of the effects of damage on the IAS by analyzing the digital signal output of a rotary incremental optical encoder. A comprehensive literature review of the state of knowledge in condition monitoring and fault diagnostics of rotating machinery, including gearbox systems, is presented. Progress and new developments over the past 30 years in failure detection techniques for rotating machinery, including engines, bearings and gearboxes, are thoroughly reviewed. This work is limited to the analysis of a gear train system with gear tooth surface faults using an angular motion analysis technique. Angular motion data were acquired using an incremental optical encoder, and results were compared to a vibration-based technique whose data were acquired using an accelerometer. The signals were obtained and analyzed in the phase domain using signal averaging to determine the existence and position of faults in the gear train system. Forces between the mating tooth surfaces are analyzed and simulated to validate the influence of the presence of damage on the pressure angle and the IAS. National Instruments hardware is used and NI LabVIEW software code is developed for real-time, online condition monitoring and fault detection. The sensitivity of optical encoders for gear fault detection is experimentally investigated by applying IAS analysis under different gear damage levels and different operating conditions. A reliable methodology is developed for selecting appropriate testing/operating conditions of a rotating system to generate an alarm system for damage detection.
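The IAS computation at the heart of this approach is simple: divide the fixed angular increment between encoder lines by the measured time between pulse arrivals. A minimal sketch, with a hypothetical 1024-line encoder and a 2% synthetic speed ripple standing in for a tooth fault (both assumptions, not the dissertation's test rig):

```python
import numpy as np

# Hypothetical 1024-line incremental encoder on a shaft at a nominal 10 rev/s.
ppr = 1024
nominal = 10.0  # revolutions per second
angles = np.arange(0, ppr * 2) * (2 * np.pi / ppr)  # two revolutions of pulses

# Shaft speed with a 2% once-per-revolution ripple (a stand-in for a fault).
omega = 2 * np.pi * nominal * (1 + 0.02 * np.sin(angles))  # rad/s at each pulse

# Pulse arrival times: each encoder increment takes (increment / speed) seconds.
dt = (2 * np.pi / ppr) / omega
times = np.cumsum(dt)

# IAS estimate: fixed angular increment divided by measured inter-pulse interval.
ias = (2 * np.pi / ppr) / np.diff(times)
ripple = (ias.max() - ias.min()) / (2 * np.pi * nominal)
print(round(ripple, 3))  # 0.04: the +/-2% ripple appears peak-to-peak in the IAS
```

In a real system the inter-pulse intervals come from a high-frequency counter timestamping encoder edges; the arithmetic is the same.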
A High Resolution Graphic Input System for Interactive Graphic Display Terminals. Appendix B.
ERIC Educational Resources Information Center
Van Arsdall, Paul Jon
The search for a satisfactory computer graphics input system led to this version of an analog sheet encoder which is transparent and requires no special probes. The goal of the research was to provide high resolution touch input capabilities for an experimental minicomputer based intelligent terminal system. The technique explored is compatible…
A Study of Computer Techniques for Music Research. Final Report.
ERIC Educational Resources Information Center
Lincoln, Harry B.
Work in three areas comprised this study of computer use in thematic indexing for music research: (1) acquisition, encoding, and keypunching of data--themes of which now number about 50,000 (primarily 16th Century Italian vocal music) and serve as a test base for program development; (2) development of computer programs to process this data; and…
Binding Affinity prediction with Property Encoded Shape Distribution signatures
Das, Sourav; Krein, Michael P.
2010-01-01
We report the use of the molecular signatures known as “Property-Encoded Shape Distributions” (PESD) together with standard Support Vector Machine (SVM) techniques to produce validated models that can predict the binding affinity of a large number of protein ligand complexes. This “PESD-SVM” method uses PESD signatures that encode molecular shapes and property distributions on protein and ligand surfaces as features to build SVM models that require no subjective feature selection. A simple protocol was employed for tuning the SVM models during their development, and the results were compared to SFCscore – a regression-based method that was previously shown to perform better than 14 other scoring functions. Although the PESD-SVM method is based on only two surface property maps, the overall results were comparable. For most complexes with a dominant enthalpic contribution to binding (ΔH/-TΔS > 3), a good correlation between true and predicted affinities was observed. Entropy and solvent were not considered in the present approach and further improvement in accuracy would require accounting for these components rigorously. PMID:20095526
Improvement of encoding and retrieval in normal and pathological aging with word-picture paradigm.
Iodice, Rosario; Meilán, Juan José G; Carro, Juan
2015-01-01
During the aging process, there is a progressive deficit in the encoding of new information and its retrieval. Different strategies are used to maintain or optimize memory, or to diminish these deficits, in people with and without dementia. One of the classic techniques is paired-associate learning (PAL), which is based on improving the encoding of memories but has yet to be used to its full potential in people with dementia. In this study, our aim is to corroborate the importance of PAL tasks as instrumental tools for creating contextual cues during both the encoding and retrieval phases of memory. Additionally, we aim to identify the most effective form of presenting the related items. Pairs of stimuli were shown to healthy elderly people and to patients with moderate and mild Alzheimer's disease. The encoding conditions were as follows: word/word, picture/picture, picture/word, and word/picture. Associative cued recall of the second item in the pair shows that retrieval is highest for the word/picture condition in the two groups of patients with dementia when compared to the other conditions, while word/word is the least effective in all cases. These results confirm that PAL is an effective tool for creating contextual cues during both the encoding and retrieval phases in people with dementia when the items are presented in the word/picture condition. In this way, the encoding and retrieval deficit can be reduced in these people.
Enhancing prospective memory in mild cognitive impairment: The role of enactment.
Pereira, Antonina; de Mendonça, Alexandre; Silva, Dina; Guerreiro, Manuela; Freeman, Jayne; Ellis, Judi
2015-01-01
Prospective memory (PM) is a fundamental requirement for independent living which might be prematurely compromised in the neurodegenerative process, namely in mild cognitive impairment (MCI), a typical prodromal Alzheimer's disease (AD) phase. Most encoding manipulations that typically enhance learning in healthy adults are of minimal benefit to AD patients. However, there is some indication that these can display a recall advantage when encoding is accompanied by the physical enactment of the material. The aim of this study was to explore the potential benefits of enactment at encoding and cue-action relatedness on memory for intentions in MCI patients and healthy controls using a behavioral PM experimental paradigm. We report findings examining the influence of enactment at encoding for PM performance in MCI patients and age- and education-matched controls using a laboratory-based PM task with a factorial independent design. PM performance was consistently superior when physical enactment was used at encoding and when target-action pairs were strongly associated. Importantly, these beneficial effects were cumulative and observable across both a healthy and a cognitively impaired lifespan as well as evident in the perceived subjective difficulty in performing the task. The identified beneficial effects of enacted encoding and semantic relatedness have unveiled the potential contribution of this encoding technique to optimize attentional demands through an adaptive allocation of strategic resources. We discuss our findings with respect to their potential impact on developing strategies to improve PM in AD sufferers.
Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition
NASA Astrophysics Data System (ADS)
Buciu, Ioan; Pitas, Ioannis
Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant-information minimization, mutual-information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.
A Real-Time High Performance Data Compression Technique For Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on block-transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desirable compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.
Kawano, Tomonori
2013-03-01
There has been a wide variety of approaches for handling pieces of DNA as "unplugged" tools for digital information storage and processing, including a series of studies in security-related areas such as DNA-based digital barcodes, watermarks and cryptography. In the present article, novel designs of artificial genes are proposed as media for storing digitally compressed image data for bio-computing purposes, whereas natural genes principally encode proteins. Furthermore, the proposed system allows cryptographic application of DNA through biochemically editable designs with the capacity for steganographic embedment of numeric data. As a model case of applying the image-coding DNA technique, combined numeric and biochemical protocols are employed for ciphering given "passwords" and/or secret numbers using DNA sequences. The "passwords" of interest were decomposed into single letters and translated into font images coded on separate DNA chains, with coding regions in which the images are encoded according to a novel run-length encoding rule, and non-coding regions designed for the biochemical editing and remodeling processes that reveal the hidden orientation of the letters composing the original "passwords". The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the termini of the chains engineered by the polymerase chain reaction. Lastly, additional protocols for steganographic overwriting of numeric data of interest over the image-coding DNA are also discussed.
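The run-length idea can be sketched generically (the article's exact encoding rule is not reproduced here): run-length encode one bitmap row of a glyph, then write each run length in base 4 over the DNA alphabet. The base assignment and word width below are arbitrary choices for this illustration.

```python
# A minimal illustration: run-length encode a monochrome font bitmap row, then
# write each run length in base 4 using the DNA alphabet so the image data can
# be carried on a synthetic DNA strand.
BASES = "ACGT"  # digit values 0..3 (an arbitrary assignment for this sketch)

def runs(bits):
    """Run lengths of a 0/1 sequence, starting with the run of 0s (may be empty)."""
    out, current, count = [], 0, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            out.append(count)
            current, count = b, 1
    out.append(count)
    return out

def to_dna(lengths, width=3):
    """Fixed-width base-4 encoding of each run length as a DNA word."""
    words = []
    for n in lengths:
        digits = [(n >> (2 * i)) & 3 for i in reversed(range(width))]
        words.append("".join(BASES[d] for d in digits))
    return "".join(words)

row = [0, 0, 1, 1, 1, 1, 0, 1]  # one scan line of a glyph
print(runs(row))                # [2, 4, 1, 1]
print(to_dna(runs(row)))        # AAGACAAACAAC
```

A real implementation would also reserve restriction sites in the non-coding regions so the strand stays biochemically editable, which this sketch omits.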
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on two back-to-back encoders. The first encoder embeds a robust watermark into remote sensing imagery using a Discrete Cosine Transform (DCT) approach; such a watermark is widely used to protect the copyright of an image. The second encoder embeds a fragile watermark using the SHA-1 hash function; the purpose of the fragile watermark is to prove the authenticity of the image (i.e., to make tampering detectable). The proposed technique was developed in response to new challenges in piracy of remote sensing imagery ownership, which have led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Accordingly, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, or "watermark," into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as the cover and a colored EIAST logo is used as the watermark. To evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
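A robust DCT watermark of the kind the first encoder uses can be sketched by forcing the sign of one mid-frequency coefficient per block. This is a minimal illustration; the block size, coefficient position, and embedding strength below are assumptions, not the paper's parameters.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def embed_bit(block, bit, pos=(3, 4), strength=20.0):
    """Embed one watermark bit by forcing the sign of a mid-frequency
    2D-DCT coefficient (illustrative rule, survives mild noise)."""
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T            # forward 2D DCT
    C[pos] = strength if bit else -strength
    return D.T @ C @ D             # inverse 2D DCT

def extract_bit(block, pos=(3, 4)):
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T
    return int(C[pos] > 0)
```

Because the transform is orthonormal, pixel-domain noise of a given energy perturbs the marked coefficient by the same energy, so a large `strength` trades visibility for robustness.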
Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram
2010-01-01
MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794
Cyclic motion encoding for enhanced MR visualization of slip interfaces.
Mariappan, Yogesh K; Glaser, Kevin J; Manduca, Armando; Ehman, Richard L
2009-10-01
To develop and test a magnetic resonance imaging-based method for assessing the mechanical shear connectivity across tissue interfaces with phantom experiments and in vivo feasibility studies. External vibrations were applied to phantoms and tissue and the differential motion on either side of interfaces within the media was mapped onto the phase of the MR images using cyclic motion encoding gradients. The phase variations within the voxels of functional slip interfaces reduced the net magnitude signal in those regions, thus enhancing their visualization. A simple two-compartment model was developed to relate this signal loss to the intravoxel phase variations. In vivo studies of the abdomen and forearm were performed to visualize slip interfaces in healthy volunteers. The phantom experiments demonstrated that the proposed technique can assess the functionality of shear slip interfaces and they provided experimental validation for the theoretical model developed. Studies of the abdomen showed that the slip interface between the small bowel and the peritoneal wall can be visualized. In the forearm, this technique was able to depict the slip interfaces between the functional compartments of the extrinsic forearm muscles. Functional shear slip interfaces can be visualized sensitively using cyclic motion encoding of externally applied tissue vibrations. (c) 2009 Wiley-Liss, Inc.
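The relation between intravoxel phase variation and magnitude loss in the two-compartment model can be written down directly. This is a simplified sketch; the published model may include relaxation and partial-volume terms not shown here.

```python
import numpy as np

def voxel_magnitude(f, phi1, phi2):
    """Two-compartment model: net signal magnitude of a voxel whose two
    compartments (volume fractions f and 1-f) accumulated different
    motion-encoded phases phi1 and phi2 (radians)."""
    return abs(f * np.exp(1j * phi1) + (1 - f) * np.exp(1j * phi2))
```

At a functional slip interface the two sides move differently, the accumulated phases diverge, and the net magnitude drops, which is exactly the signal loss used to visualize the interface; a bonded interface gives equal phases and no loss.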
Ripple artifact reduction using slice overlap in slice encoding for metal artifact correction.
den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens
2015-01-01
Multispectral imaging (MSI) significantly reduces metal artifacts. Yet, especially in techniques that use gradient selection, such as slice encoding for metal artifact correction (SEMAC), a residual ripple artifact may be prominent. Here, an analysis is presented of the ripple artifact and of slice overlap as an approach to reduce the artifact. The ripple artifact was analyzed theoretically to clarify its cause. Slice overlap, conceptually similar to spectral bin overlap in multi-acquisition with variable resonances image combination (MAVRIC), was achieved by reducing the selection gradient and, thus, increasing the slice profile width. Time domain simulations and phantom experiments were performed to validate the analyses and proposed solution. Discontinuities between slices are aggravated by signal displacement in the frequency encoding direction in areas with deviating B0. Specifically, it was demonstrated that ripple artifacts appear only where B0 varies both in-plane and through-plane. Simulations and phantom studies of metal implants confirmed the efficacy of slice overlap to reduce the artifact. The ripple artifact is an important limitation of gradient selection based MSI techniques, and can be understood using the presented simulations. At a scan-time penalty, slice overlap effectively addressed the artifact, thereby improving image quality near metal implants. © 2014 Wiley Periodicals, Inc.
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
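The correlation-sorting preprocessing can be sketched as a greedy reordering of signal segments before they are stacked into a 2D array for the image coder. This is an illustrative sketch; the paper's exact sorting criterion may differ.

```python
import numpy as np

def correlation_sort(segments):
    """Greedy reordering: start from the first segment and repeatedly
    append the remaining segment most correlated with the last one, so
    adjacent rows of the 2D 'image' are similar and compress better."""
    segs = [np.asarray(s, dtype=float) for s in segments]
    order = [0]
    remaining = set(range(1, len(segs)))
    while remaining:
        last = segs[order[-1]]
        best = max(remaining,
                   key=lambda i: np.corrcoef(last, segs[i])[0, 1])
        order.append(best)
        remaining.remove(best)
    return order
```

Transmitting the permutation alongside the compressed image lets the decoder restore the original segment order.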
1994-03-01
evaluation of its anticipated value. If the program can be accomplished using conventional techniques, this should be seriously considered. Development or... the direct frequency generating principles such as pulse tachos, turbine flowmeters, and encoders, also Doppler and laser techniques used for... Figure 5.3. The basic concepts of the laser ring gyro (LRG). The principle depends upon the guidance of two beams of laser light around an
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei
2015-12-01
An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE, and the DRPE-coded image is then encrypted by the computational ghost imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information remains the same. It can also improve the quality of the reconstructed plaintext compared with DRPE-GI at the same number of sampling times. This technique can be applied immediately to encryption and data storage, with the advantages of high security, fast transmission, and high quality of reconstructed information.
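The DRPE stage can be sketched numerically with two random phase masks and a Fourier-transform pair. This is a minimal sketch of classical 4f DRPE only; the CGI stage, compressive sensing recovery, and key distribution are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, m1, m2):
    """Classical 4f double random phase encoding: random phase mask in the
    input plane, Fourier transform, second mask in the Fourier plane,
    inverse transform."""
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def drpe_decrypt(cipher, m1, m2):
    """Undo the encoding with the conjugate masks (the decryption keys)."""
    return np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)

n = 16
img = rng.random((n, n))                          # plaintext intensity image
m1 = np.exp(2j * np.pi * rng.random((n, n)))      # input-plane phase mask
m2 = np.exp(2j * np.pi * rng.random((n, n)))      # Fourier-plane phase mask
cipher = drpe_encrypt(img, m1, m2)                # stationary white-noise-like field
recovered = drpe_decrypt(cipher, m1, m2).real
```

With the correct masks the recovery is exact; a wrong Fourier-plane mask leaves the output noise-like, which is the basis of the key security.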
Design, synthesis and selection of DNA-encoded small-molecule libraries.
Clark, Matthew A; Acharya, Raksha A; Arico-Muendel, Christopher C; Belyanskaya, Svetlana L; Benjamin, Dennis R; Carlson, Neil R; Centrella, Paolo A; Chiu, Cynthia H; Creaser, Steffen P; Cuozzo, John W; Davie, Christopher P; Ding, Yun; Franklin, G Joseph; Franzen, Kurt D; Gefter, Malcolm L; Hale, Steven P; Hansen, Nils J V; Israel, David I; Jiang, Jinwei; Kavarana, Malcolm J; Kelley, Michael S; Kollmann, Christopher S; Li, Fan; Lind, Kenneth; Mataruse, Sibongile; Medeiros, Patricia F; Messer, Jeffrey A; Myers, Paul; O'Keefe, Heather; Oliff, Matthew C; Rise, Cecil E; Satz, Alexander L; Skinner, Steven R; Svendsen, Jennifer L; Tang, Lujia; van Vloten, Kurt; Wagner, Richard W; Yao, Gang; Zhao, Baoguang; Morgan, Barry A
2009-09-01
Biochemical combinatorial techniques such as phage display, RNA display and oligonucleotide aptamers have proven to be reliable methods for generation of ligands to protein targets. Adapting these techniques to small synthetic molecules has been a long-sought goal. We report the synthesis and interrogation of an 800-million-member DNA-encoded library in which small molecules are covalently attached to an encoding oligonucleotide. The library was assembled by a combination of chemical and enzymatic synthesis, and interrogated by affinity selection. We describe methods for the selection and deconvolution of the chemical display library, and the discovery of inhibitors for two enzymes: Aurora A kinase and p38 MAP kinase.
Radiographic applications of spatial frequency multiplexing
NASA Technical Reports Server (NTRS)
Macovski, A.
1981-01-01
The application of spatial frequency encoding techniques, which allow different regions of the X-ray spectrum to be encoded on conventional radiographs, was studied. Clinical considerations were reviewed, as were experimental studies involving the encoding and decoding of X-ray images at different energies and the subsequent processing of the data to produce images of specific materials in the body.
NASA Astrophysics Data System (ADS)
Bentley, Joel B.; Davis, Jeffrey A.; Albero, Jorge; Moreno, Ignacio
2006-10-01
We report a new self-interferometric technique for visualizing phase patterns that are encoded onto a phase-only liquid-crystal display (LCD). In our approach, the LCD generates both the desired object beam and the reference beam. Normally the phase patterns are encoded with a phase depth of 2π radians, and all of the incident energy is diffracted into the first-order beam. However, by reducing this phase depth, we can generate an additional zero-order diffracted beam, which acts as the reference beam. We work at distances such that these two patterns spatially interfere, producing an interference pattern that displays the encoded phase pattern. This approach was used recently to display the phase vortices of helical Ince-Gaussian beams. Here we show additional experimental results and analyze the process.
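The appearance of the zero-order reference beam when the encoded phase depth is reduced below 2π can be checked numerically. This is a minimal 1D sketch using a linear phase ramp (blazed grating); the experimental LCD parameters are not modeled.

```python
import numpy as np

N, periods = 1024, 16
x = np.arange(N)
ramp = (2 * np.pi * periods * x / N) % (2 * np.pi)   # encoded phase pattern

def order_power(depth, order):
    """Power in a given diffraction order when the displayed phase depth
    is scaled from 2*pi down to 2*pi*depth (0 < depth <= 1)."""
    field = np.exp(1j * depth * ramp)
    spectrum = np.fft.fft(field) / N
    return abs(spectrum[order]) ** 2
```

At full depth (`depth = 1.0`) essentially all energy sits in the first order and the zero order vanishes; at reduced depth a zero-order beam appears and can serve as the in-line reference.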
Two novel motion-based algorithms for surveillance video analysis on embedded platforms
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.
2010-05-01
This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
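The consistent motion mask used by the first algorithm can be sketched as temporal voting over the encoder's motion-vector fields. This is an illustrative sketch; the published algorithm's consistency test, thresholds, and background segmentation step are more elaborate.

```python
import numpy as np

def consistent_motion_mask(mv_fields, mag_thresh=1.0, min_hits=3):
    """Mark a block as moving only if its encoder motion vector exceeds
    mag_thresh in at least min_hits frames, suppressing the spurious
    vectors a rate-optimizing encoder produces on static texture."""
    hits = np.zeros(mv_fields[0].shape[:2], dtype=int)
    for mv in mv_fields:                 # mv: H x W x 2 array of (dx, dy) per block
        mag = np.hypot(mv[..., 0], mv[..., 1])
        hits += (mag > mag_thresh).astype(int)
    return hits >= min_hits
```

Reusing the encoder's vectors this way avoids a second motion-estimation pass, which is the point of the design on a resource-constrained camera.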
The "Reverse Case Study:" Enhancing Creativity in Case-Based Instruction in Leadership Studies
ERIC Educational Resources Information Center
Atkinson, Timothy N.
2014-01-01
In this application brief I share a case study assignment I used in my "Leadership in Complex Organizations" classes to promote creativity in problem solving. I sorted Ph.D. students into two teams and trained them to use creative writing techniques to "encode" theory into their own cases. A sense of competition emerged. Later,…
Modeling Image Patches with a Generic Dictionary of Mini-Epitomes
Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.
2015-01-01
The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. A key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859
Measurement of pulsatile motion with millisecond resolution by MRI.
Souchon, Rémi; Gennisson, Jean-Luc; Tanter, Mickael; Salomir, Rares; Chapelon, Jean-Yves; Rouvière, Olivier
2012-06-01
We investigated a technique based on phase-contrast cine MRI combined with deconvolution of the phase shift waveforms to measure rapidly varying pulsatile motion waveforms. The technique does not require steady-state displacement during motion encoding. Simulations and experiments were performed in porcine liver samples in view of a specific application, namely the observation of transient displacements induced by acoustic radiation force. Simulations illustrate the advantages and shortcomings of the method. For experimental validation, the waveforms were acquired with an ultrafast ultrasound scanner (Supersonic Imagine Aixplorer), and the rates of decay of the waveforms (relaxation time) were compared. With a bipolar motion-encoding gradient of 8.4 ms, the method was able to measure displacement waveforms with a temporal resolution of 1 ms over a time course of 40 ms. Reasonable agreement was found between the rate of decay of the waveforms measured in ultrasound (2.8 ms) and in MRI (2.7-3.3 ms). Copyright © 2011 Wiley-Liss, Inc.
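The deconvolution step can be sketched in the frequency domain: the measured phase is the motion-encoding gradient waveform convolved with the displacement waveform, so dividing their spectra (with regularization) recovers the displacement. This is a simplified sketch assuming circular convolution and an ad hoc regularization constant; the published formulation may differ.

```python
import numpy as np

def deconvolve(phase, kernel, eps=1e-3):
    """Recover the displacement waveform from motion-encoded phase shifts
    by regularized frequency-domain division, assuming
    phase = kernel (*) displacement with (*) circular convolution."""
    n = len(phase)
    K = np.fft.fft(kernel, n)
    P = np.fft.fft(phase, n)
    D = P * np.conj(K) / (np.abs(K) ** 2 + eps)   # Wiener-style inverse
    return np.fft.ifft(D).real
```

Note that a bipolar gradient has zero net area, so the DC component of the displacement is not encoded; the regularization simply maps it to zero, and only the zero-mean part of the waveform is recovered.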
Clarke, Patrick J.; Collins, Robert J.; Dunjko, Vedran; Andersson, Erika; Jeffers, John; Buller, Gerald S.
2012-01-01
Digital signatures are frequently used in data transfer to prevent impersonation, repudiation and message tampering. Currently used classical digital signature schemes rely on public key encryption techniques, where the complexity of so-called ‘one-way' mathematical functions is used to provide security over sufficiently long timescales. No mathematical proofs are known for the long-term security of such techniques. Quantum digital signatures offer a means of sending a message, which cannot be forged or repudiated, with security verified by information-theoretical limits and quantum mechanics. Here we demonstrate an experimental system, which distributes quantum signatures from one sender to two receivers and enables message sending ensured against forging and repudiation. Additionally, we analyse the security of the system in some typical scenarios. Our system is based on the interference of phase-encoded coherent states of light and our implementation utilizes polarization-maintaining optical fibre and photons with a wavelength of 850 nm. PMID:23132024
Dissociative effects of true and false recall as a function of different encoding strategies.
Goodwin, Kerri A
2007-01-01
Goodwin, Meissner, and Ericsson (2001) proposed a path model in which elaborative encoding predicted the likelihood of verbalisation of critical, nonpresented words at encoding, which in turn predicted the likelihood of false recall. The present study tested this model of false recall experimentally with a manipulation of encoding strategy and the implementation of the process-tracing technique of protocol analysis. Findings indicated that elaborative encoding led to more verbalisations of critical items during encoding than rote rehearsal of list items, but false recall rates were reduced under elaboration conditions (Experiment 2). Interestingly, false recall was more likely to occur when items were verbalised during encoding than not verbalised (Experiment 1), and participants tended to reinstate their encoding strategies during recall, particularly after elaborative encoding (Experiment 1). Theoretical implications for the interplay of encoding and retrieval processes of false recall are discussed.
MobileASL: intelligibility of sign language video over mobile phones.
Cavender, Anna; Vanam, Rahul; Barney, Dane K; Ladner, Richard E; Riskin, Eve A
2008-01-01
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results that show high resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better-quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
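A region-of-interest encoding of the kind studied here can be sketched as a per-macroblock quantizer map that spends bits on the face region and recovers them elsewhere. This is a hypothetical sketch with simplified H.264-style QP semantics; it is not the MobileASL encoder's actual rate control, and the parameter values are illustration only.

```python
import numpy as np

def qp_map(face_mask, base_qp=30, face_qp_drop=6):
    """Per-macroblock quantization parameter map: lower QP (finer
    quantization, more bits) inside the face region, slightly higher QP
    in the background to roughly hold the overall bit rate."""
    qp = np.full(face_mask.shape, base_qp, dtype=int)
    qp[face_mask] = base_qp - face_qp_drop
    # crude rate balance: raise background QP in proportion to ROI area
    frac = face_mask.mean()
    qp[~face_mask] += int(round(face_qp_drop * frac / max(1 - frac, 1e-6)))
    return qp
```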
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes are described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers.
ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves compression schemes which provide better tolerances in conditions with a high BER.
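The delta-encoding dependence described above, where each compressed header is only a set of differences from its predecessor, can be illustrated with a toy header compressor. This is a schematic sketch, not the RFC 1144 wire format.

```python
def delta_encode(headers):
    """Send the first header in full, then only per-field differences,
    in the style of Van Jacobson TCP/IP header compression. Losing one
    packet desynchronizes every later header until resynchronization,
    which is the loss propagation the survey discusses; SCPS avoids the
    inter-packet dependence at the cost of larger compressed headers."""
    first = dict(headers[0])
    deltas = []
    for prev, cur in zip(headers, headers[1:]):
        deltas.append({k: cur[k] - prev[k] for k in cur if cur[k] != prev[k]})
    return first, deltas

def delta_decode(first, deltas):
    out = [dict(first)]
    for d in deltas:
        nxt = dict(out[-1])
        for k, v in d.items():
            nxt[k] += v
        out.append(nxt)
    return out
```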
Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhiyong; Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian 361005; Smith, Pieter E. S.
Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.
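The core idea, that with a priori knowledge of the indirect-dimension frequencies the number of scans can equal the number of resonances, can be sketched as a small linear system. The frequencies, delays, and amplitudes below are made-up illustration values, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.array([120.0, 340.0, 775.0])     # known indirect-dimension resonances (Hz)
t = np.arange(len(freqs)) * 2.9e-4          # customized evolution delays (s)

# Fourier encoding matrix: one row per scan, one column per resonance.
E = np.exp(2j * np.pi * freqs[None, :] * t[:, None])

amps = rng.random(len(freqs))               # 'true' peak amplitudes
signal = E @ amps                           # one indirect-domain point per scan
recovered = np.linalg.solve(E, signal).real # invert the encoding: 3 scans, 3 peaks
```

With as many scans as resonances the encoding matrix is square and (for distinct frequencies and suitable delays) invertible, so the peak amplitudes are recovered exactly, whereas a blind FFT scheme would need many more increments to resolve the same frequencies.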
FRET-based genetically-encoded sensors for quantitative monitoring of metabolites.
Mohsin, Mohd; Ahmad, Altaf; Iqbal, Muhammad
2015-10-01
Neighboring cells in the same tissue can exist in different states of dynamic activity. After genomics, proteomics and metabolomics, fluxomics is now equally important for generating accurate quantitative information on the cellular and sub-cellular dynamics of ions and metabolites, which is critical for a functional understanding of organisms. Various spectrometry techniques are used for monitoring ions and metabolites, although their temporal and spatial resolutions are limited. Discovery of the fluorescent proteins and their variants has revolutionized cell biology. Therefore, novel tools and methods need to be deployed in specific cells and targeted to sub-cellular compartments in order to quantify target-molecule dynamics directly. We require tools that can measure cellular activities and protein dynamics with sub-cellular resolution. Biosensors based on fluorescence resonance energy transfer (FRET) are genetically encoded and hence can specifically target sub-cellular organelles by fusion to proteins or targeting sequences. Over the last decade, FRET-based genetically encoded sensors for molecules involved in energy production, reactive oxygen species and secondary messengers have helped to unravel key aspects of cellular physiology. This review describes the design and principles of the sensors, presents a database of sensors for different analytes/processes, and illustrates examples of application in quantitative live cell imaging.
A brain-based account of “basic-level” concepts
Bauer, Andrew James; Just, Marcel Adam
2017-01-01
This study provides a brain-based account of how object concepts at an intermediate (basic) level of specificity are represented, offering an enriched view of what it means for a concept to be a basic-level concept, a research topic pioneered by Rosch and others (Rosch et al., 1976). Applying machine learning techniques to fMRI data, it was possible to determine the semantic content encoded in the neural representations of object concepts at basic and subordinate levels of abstraction. The representation of basic-level concepts (e.g. bird) was spatially broad, encompassing sensorimotor brain areas that encode concrete object properties, and also language and heteromodal integrative areas that encode abstract semantic content. The representation of subordinate-level concepts (robin) was less widely distributed, concentrated in perceptual areas that underlie concrete content. Furthermore, basic-level concepts were representative of their subordinates in that they were neurally similar to their typical but not atypical subordinates (bird was neurally similar to robin but not woodpecker). The findings provide a brain-based account of the advantages that basic-level concepts enjoy in everyday life over subordinate-level concepts: the basic level is a broad topographical representation that encompasses both concrete and abstract semantic content, reflecting the multifaceted yet intuitive meaning of basic-level concepts. PMID:28826947
QR-decomposition based SENSE reconstruction using parallel architecture.
Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad
2018-04-01
Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU based SENSE reconstruction is evaluated against single and multicore CPU using openMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
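The numerical core of the abstract above, inverting the rectangular SENSE encoding matrix via QR decomposition, can be sketched in pure Python as a toy least-squares solve. The 3×2 system, the values, and the function names are illustrative only, not taken from the paper (which targets GPU-scale problems):

```python
def qr_decompose(A):
    """Classical Gram-Schmidt QR of an m x n matrix (m >= n, full column rank).
    Returns Q as a list of n orthonormal columns and R as an n x n upper
    triangular matrix, so that A = Q R."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q = []
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(Q)):
            R[i][j] = sum(Q[i][k] * cols[j][k] for k in range(m))
            v = [v[k] - R[i][j] * Q[i][k] for k in range(m)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        Q.append([x / R[j][j] for x in v])
    return Q, R

def sense_solve(E, y):
    """Least-squares solve of E x = y (E: coil encoding matrix, y: folded
    pixel samples) via QR: solve R x = Q^T y by back substitution."""
    Q, R = qr_decompose(E)
    m, n = len(y), len(R)
    b = [sum(Q[i][k] * y[k] for k in range(m)) for i in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (b[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x
```

For a consistent overdetermined system the solver recovers the unaliased pixel values exactly; in practice the encoding matrix is built from measured coil sensitivities and the solve is batched per aliased pixel group.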
White-Light Optical Information Processing and Holography.
1984-06-22
[Report abstract garbled in extraction. Recoverable keywords: image processing, image deblurring, source encoding, signal sampling, coherence measurement, noise performance, pseudocolor encoding. Recoverable section headings: 2.1 Broad Spectral Band Color Image Deblurring; 2.2 Noise Performance; 2.3 Pseudocolor Encoding with Three Primary (colors). Surviving text notes that the technique is particularly suitable for deblurring linearly smeared color images.]
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. When the centroid of a fixed set of vectors is computed this way, the result is identical to a conventional batch centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The defined quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer is changing definition or state after every encoded vector, the decoder must now receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
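The recursive moving-average centroid update described above has a compact form: after the n-th vector is assigned to a codeword, that centroid moves by (v - c)/n. A minimal sketch, assuming a pre-seeded codebook and a simple count-per-codeword convention (both hypothetical):

```python
def nearest(codebook, vec):
    """Index of the codeword with minimum squared Euclidean distance."""
    dists = [sum((c - v) ** 2 for c, v in zip(cw, vec)) for cw in codebook]
    return dists.index(min(dists))

def encode_adaptive(codebook, vectors):
    """Adaptive VQ: encode each vector, then move the selected centroid
    toward it with a recursive running mean (count-weighted).
    The codebook is updated in place; the decoder must mirror the update."""
    counts = [1] * len(codebook)   # assume each codeword was seeded by one vector
    indices = []
    for vec in vectors:
        i = nearest(codebook, vec)
        indices.append(i)
        counts[i] += 1
        codebook[i] = [c + (v - c) / counts[i] for c, v in zip(codebook[i], vec)]
    return indices
```

Because the decoder applies the same deterministic update rule, only the indices (plus the side-information bits the abstract mentions) need to be transmitted.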
SEU hardened memory cells for a CCSDS Reed Solomon encoder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, S.; Canaris, J.; Liu, K.
This paper reports on a design technique to harden CMOS memory circuits against Single Event Upset (SEU) in the space environment. The design technique provides a recovery mechanism which is independent of the shape of the upsetting event. A RAM cell and a Flip Flop design are presented to demonstrate the method. The Flip Flop was used in the control circuitry for a Reed Solomon encoder designed for the Space Station and Explorer platforms.
Plainchont, Bertrand; Pitoux, Daisy; Cyrille, Mathieu; Giraud, Nicolas
2018-02-06
We propose an original concept to measure enantiomeric excesses accurately on proton NMR spectra, combining high-resolution techniques based on spatial encoding of the sample with the use of optically active, weakly orienting solvents. We show that it is possible to simulate accurately dipolar edited spectra of enantiomers dissolved in a chiral liquid crystalline phase, and to use these simulations to calibrate integrations measured on experimental data, in order to perform a quantitative chiral analysis. This approach is demonstrated on a chemical intermediate for which optical purity is an essential criterion. We find a very good correlation between the experimental and calculated integration ratios extracted from G-SERF spectra, which paves the way to a general method for determining enantiomeric excesses based on the observation of 1H nuclei.
Optical multiple-image authentication based on cascaded phase filtering structure
NASA Astrophysics Data System (ADS)
Wang, Q.; Alfalou, A.; Brosseau, C.
2016-10-01
In this study, we report on recent developments in optical image authentication algorithms. Compared with conventional optical encryption, optical image authentication achieves greater security strength because such methods do not need to recover the plaintext totally during the decryption period. Several recently proposed authentication systems are briefly introduced. We also propose a novel multiple-image authentication system, where multiple original images are encoded into a photon-limited encoded image by using a triple-plane based phase retrieval algorithm and the photon counting imaging (PCI) technique. One can only recover a noise-like image using correct keys. To verify the authenticity of multiple images, a nonlinear fractional correlation is employed to recognize the original information hidden in the decrypted results. The proposal can be implemented optically using a cascaded phase filtering configuration. Computer simulation results are presented to evaluate the performance of this proposal and its effectiveness.
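The photon counting imaging (PCI) step mentioned above can be modeled generically: each pixel of the normalized encoded image receives a Poisson-distributed photon count whose mean is proportional to a limited photon budget. A sketch under that common PCI assumption (not the authors' exact implementation; the sampler is Knuth's classic Poisson algorithm):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm: a Poisson-distributed sample with mean lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def photon_limited(image, n_photons, seed=0):
    """Photon-limited version of a nonnegative image: the expected photon
    count at a pixel is n_photons * pixel / sum(all pixels)."""
    rng = random.Random(seed)
    total = sum(sum(row) for row in image)
    return [[poisson(n_photons * v / total, rng) for v in row] for row in image]
```

The resulting sparse count map is what the verification stage correlates against, rather than a fully recovered image.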
Remote creation of hybrid entanglement between particle-like and wave-like optical qubits
NASA Astrophysics Data System (ADS)
Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien
2014-07-01
The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.
Ultrathin Nonlinear Metasurface for Optical Image Encoding.
Walter, Felicitas; Li, Guixin; Meier, Cedrik; Zhang, Shuang; Zentgraf, Thomas
2017-05-10
Security of optical information is of great importance in modern society. Many cryptography techniques based on classical and quantum optics have been widely explored in the linear optical regime. Nonlinear optical encryption in which encoding and decoding involve nonlinear frequency conversions represents a new strategy for securing optical information. Here, we demonstrate that an ultrathin nonlinear photonic metasurface, consisting of meta-atoms with 3-fold rotational symmetry, can be used to hide optical images under illumination with a fundamental wave. However, the hidden image can be read out from second harmonic generation (SHG) waves. This is achieved by controlling the destructive and constructive interferences of SHG waves from two neighboring meta-atoms. In addition, we apply this concept to obtain gray scale SHG imaging. Nonlinear metasurfaces based on space variant optical interference open new avenues for multilevel image encryption, anticounterfeiting, and background free image reconstruction.
A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing
NASA Astrophysics Data System (ADS)
Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz
2018-06-01
Today, in the world of information communication, voice information has a particular importance. One way to preserve voice data from attacks is voice encryption. Encryption algorithms use various techniques such as hashing, chaotic maps, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, the DNA encryption technique, and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated based on various measures including signal to noise ratio, peak signal to noise ratio, correlation coefficient, signal similarity and signal frequency content. The results demonstrate the applicability of the proposed method in secure and fast encryption of voice files.
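The DNA encoding layer in such schemes is typically a 2-bits-per-nucleotide mapping of the raw audio bytes. A minimal sketch of that layer alone; the mapping table and function names are illustrative, and the paper's actual algorithm layers permutation and key material on top of this:

```python
BASES = "ACGT"  # two bits per nucleotide: A=00, C=01, G=10, T=11

def dna_encode(data):
    """Map each byte to four nucleotides, most significant bit pair first."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_decode(strand):
    """Inverse mapping: four nucleotides back to one byte."""
    out = []
    for i in range(0, len(strand), 4):
        v = 0
        for ch in strand[i:i + 4]:
            v = (v << 2) | BASES.index(ch)
        out.append(v)
    return bytes(out)
```

For example, the byte 0x1b (bit pairs 00 01 10 11) encodes to the strand "ACGT", and decoding inverts the mapping exactly.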
NASA Astrophysics Data System (ADS)
Nazrul Islam, Mohammed; Karim, Mohammad A.; Vijayan Asari, K.
2013-09-01
Protecting and processing confidential information, such as personal identification and biometrics, remains a challenging task for further research and development. A new methodology to ensure enhanced security of information in images through the use of encryption and multiplexing is proposed in this paper. We use an orthogonal encoding scheme to encode multiple pieces of information independently and then combine them together to save storage space and transmission bandwidth. The encoded and multiplexed image is encrypted employing multiple reference-based joint transform correlation. The encryption key is fed into four channels which are relatively phase shifted by different amounts. The input image is introduced to all the channels and then Fourier transformed to obtain joint power spectra (JPS) signals. The resultant JPS signals are again phase-shifted and then combined to form a modified JPS signal which yields the encrypted image after an inverse Fourier transformation. The proposed cryptographic system makes the confidential information absolutely inaccessible to any unauthorized intruder, while allowing retrieval of the information by the respective authorized recipient without any distortion. The proposed technique is investigated through computer simulations under different practical conditions in order to verify its overall robustness.
Coman, Daniel; de Graaf, Robin A; Rothman, Douglas L; Hyder, Fahmeed
2013-11-01
Spectroscopic signals which emanate from complexes between paramagnetic lanthanide (III) ions (e.g. Tm(3+)) and macrocyclic chelates (e.g. 1,4,7,10-tetramethyl-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetate, or DOTMA(4-)) are sensitive to physiology (e.g. temperature). Because nonexchanging protons from these lanthanide-based macrocyclic agents have relaxation times on the order of a few milliseconds, rapid data acquisition is possible with chemical shift imaging (CSI). Thus, Biosensor Imaging of Redundant Deviation in Shifts (BIRDS), which originates from nonexchanging protons of these paramagnetic agents and excludes water proton detection, can enable molecular imaging. Previous two-dimensional CSI experiments with such lanthanide-based macrocyclics allowed acquisition from ~12-μL voxels in rat brain within 5 min using rectangular encoding of k space. Because cubical encoding of k space in three dimensions for whole-brain coverage increases the CSI acquisition time to several tens of minutes or more, a faster CSI technique is required for BIRDS to be of practical use. Here, we demonstrate a CSI acquisition method to improve three-dimensional molecular imaging capabilities with lanthanide-based macrocyclics. Using TmDOTMA(-), we show datasets from a 20 × 20 × 20-mm(3) field of view with voxels of ~1 μL effective volume acquired within 5 min (at 11.7 T) for temperature mapping. By employing reduced spherical encoding with Gaussian weighting (RESEGAW) instead of cubical encoding of k space, a significant increase in CSI signal is obtained. In vitro and in vivo three-dimensional CSI data with TmDOTMA(-), and presumably similar lanthanide-based macrocyclics, suggest that acquisition using RESEGAW can be used for high spatiotemporal resolution molecular mapping with BIRDS. Copyright © 2013 John Wiley & Sons, Ltd.
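The speedup from reduced spherical encoding with Gaussian weighting (RESEGAW) over cubical encoding comes from skipping the corners of k-space: the inscribed sphere holds only about pi/6, roughly 52%, of the cube's phase encodes. A purely geometric sketch of such a mask (grid size and Gaussian width are illustrative, not the paper's acquisition parameters):

```python
import math

def resegaw_mask(n, sigma):
    """Phase-encode selection for an n^3 k-space grid: keep only points
    inside the inscribed sphere, each weighted by a Gaussian
    exp(-r^2 / (2 sigma^2)) of its distance r from the k-space center."""
    half = n / 2.0
    kept = []
    for kx in range(n):
        for ky in range(n):
            for kz in range(n):
                r = math.sqrt((kx - half) ** 2 + (ky - half) ** 2
                              + (kz - half) ** 2)
                if r <= half:
                    kept.append(((kx, ky, kz),
                                 math.exp(-r * r / (2 * sigma ** 2))))
    return kept
```

The kept fraction directly translates into acquisition time saved, since each dropped point is one phase-encode step that is never played out.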
Kawano, Tomonori
2013-01-01
There have been a wide variety of approaches for handling pieces of DNA as “unplugged” tools for digital information storage and processing, including a series of studies applied to the security-related area, such as DNA-based digital barcodes, watermarks and cryptography. In the present article, novel designs of artificial genes as media for storing digitally compressed image data are proposed for bio-computing purposes, while natural genes principally encode proteins. Furthermore, the proposed system allows cryptographic application of DNA through biochemically editable designs with capacity for steganographic numeric data embedment. As a model case of image-coding DNA technique application, numerically and biochemically combined protocols are employed for ciphering given “passwords” and/or secret numbers using DNA sequences. The “passwords” of interest were decomposed into single letters and translated into font images coded on separate DNA chains, with coding regions in which the images are encoded based on the novel run-length encoding rule, and non-coding regions designed for the biochemical editing and remodeling processes that reveal the hidden orientation of the letters composing the original “passwords.” The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the polymerase chain reaction-engineered termini of the chains. Lastly, additional protocols for steganographic overwriting of numeric data of interest over the image-coding DNA are also discussed. PMID:23750303
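The run-length encoding used to compress binary font-image rows can be illustrated generically; the paper describes its rule as novel, so this is a plain RLE sketch rather than a reproduction of it:

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]
```

Long runs of identical pixels, which dominate font bitmaps, collapse to a single pair, which is what makes the encoded image short enough to fit in a synthetic DNA coding region.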
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of Kohonen neural networks for codebook design. During the encoding process, the correlation of the address is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about 1/2 to 1/3 the bit rate of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2, which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in the conclusion.
Sending Foreign Language Word Processor Files over Networks.
ERIC Educational Resources Information Center
Feustle, Joseph A., Jr.
1992-01-01
Advantages of using online systems are outlined, along with specific techniques for successfully transmitting computer text files. Topics covered include Microsoft's Rich Text Format, WordPerfect encoding, text compression, and especially encoding and decoding with UNIX programs. (LB)
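The UNIX encoding/decoding programs alluded to here are classically uuencode and uudecode, which turn arbitrary 8-bit word-processor files into 7-bit-safe text that survives mail and terminal links. Python's standard binascii module exposes the same per-line transform, so the round trip can be sketched as follows (the helper names are ours; real uuencode additionally wraps the lines in a begin/end header and footer):

```python
import binascii

def uu_lines(data):
    """Uuencode arbitrary bytes into text-safe lines, 45 raw bytes per line,
    the per-line framing the UNIX uuencode program uses."""
    return [binascii.b2a_uu(data[i:i + 45]) for i in range(0, len(data), 45)]

def uu_decode(lines):
    """Decode the lines back into the original byte stream."""
    return b"".join(binascii.a2b_uu(line) for line in lines)
```

Every encoded line stays within printable ASCII, which is exactly why a foreign-language word-processor file with accented or 8-bit characters can cross a 7-bit network link intact.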
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
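Why DPCM deltas respond so well to lossless coding is visible in their zeroth-order entropy, the bound that Huffman and Rice codes approach: smooth imagery yields deltas concentrated near zero. A small sketch (the sample row is synthetic, not RECCE data):

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Zeroth-order entropy in bits per symbol of a symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def dpcm_deltas(samples):
    """First-order DPCM: keep the first sample, then successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
```

On a smooth 64-pixel ramp the raw samples need 6 bits per symbol, while the deltas collapse to a near-constant stream well under 1 bit per symbol, which is the headroom the lossless back-end exploits.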
Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.
Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S
2017-11-01
To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Bush, Nicholas E; Schroeder, Christopher L; Hobbs, Jennifer A; Yang, Anne ET; Huet, Lucie A; Solla, Sara A; Hartmann, Mitra JZ
2016-01-01
Tactile information available to the rat vibrissal system begins as external forces that cause whisker deformations, which in turn excite mechanoreceptors in the follicle. Despite the fundamental mechanical origin of tactile information, primary sensory neurons in the trigeminal ganglion (Vg) have often been described as encoding the kinematics (geometry) of object contact. Here we aimed to determine the extent to which Vg neurons encode the kinematics vs. mechanics of contact. We used models of whisker bending to quantify mechanical signals (forces and moments) at the whisker base while simultaneously monitoring whisker kinematics and recording single Vg units in both anesthetized rats and awake, body restrained rats. We employed a novel manual stimulation technique to deflect whiskers in a way that decouples kinematics from mechanics, and used Generalized Linear Models (GLMs) to show that Vg neurons more directly encode mechanical signals when the whisker is deflected in this decoupled stimulus space. DOI: http://dx.doi.org/10.7554/eLife.13969.001 PMID:27348221
Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng
2014-06-20
We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. A device consisting of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and no diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support the theoretical analysis are presented. An analysis of the resistance of the proposed method to a known public key attack is also provided.
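The intensity transmittance the authors deduce from the Jones matrix formalism rests on one building block: an ideal linear polarizer as a 2×2 Jones matrix applied to a (possibly elliptical) Jones vector. A minimal numeric sketch of that block alone, verifying Malus's law and the 50% transmission of circular light; it does not model the full cryptosystem:

```python
import math

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

def transmitted_intensity(jones_vec, theta):
    """|P(theta) . E|^2 for a (possibly complex) Jones vector E."""
    P = polarizer(theta)
    out = [P[i][0] * jones_vec[0] + P[i][1] * jones_vec[1] for i in (0, 1)]
    return sum(abs(v) ** 2 for v in out)
```

Horizontal linear light through a polarizer at 60 degrees transmits cos^2(60°) = 0.25 of its intensity, and circular light transmits exactly half regardless of the polarizer angle, matching the textbook Jones calculus the paper builds on.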
Prediction-guided quantization for video tone mapping
NASA Astrophysics Data System (ADS)
Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice
2014-09-01
Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone mapped video content. Our technique provides an appropriate quantization for each mode of both the Intra and Inter-prediction that is performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested over two different scenarios: the compression of tone mapped LDR video content (using the HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show, at equal PSNR, average bit-rate reductions of 20.3% and 27.3% for tone mapped content and of 2.4% and 2.7% for HDR content, over all the sequences and TMOs considered.
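The quantization step that every TMO contains, and that the paper adapts per prediction mode, is at heart a uniform scalar quantizer whose step size is chosen by a rate-distortion criterion. A simplified sketch; the cost model, lambda, and candidate steps below are illustrative stand-ins for the encoder's actual Lagrangian mode decision:

```python
import math

def quantize(values, step):
    """Uniform scalar quantization of floating point values to integers."""
    return [round(v / step) for v in values]

def dequantize(codes, step):
    """Reconstruction: scale the integer codes back."""
    return [c * step for c in codes]

def pick_step(values, candidate_steps, lam=0.1):
    """Choose the quantization step minimizing a toy rate-distortion cost:
    mean squared error + lam * (log2 of the number of distinct codes)."""
    best = None
    for step in candidate_steps:
        codes = quantize(values, step)
        recon = dequantize(codes, step)
        mse = sum((a - b) ** 2 for a, b in zip(values, recon)) / len(values)
        distinct = len(set(codes))
        rate = math.log2(distinct) if distinct > 1 else 0.0
        cost = mse + lam * rate
        if best is None or cost < best[0]:
            best = (cost, step)
    return best[1]
```

A coarser step cuts the symbol alphabet (rate) at the price of up to step/2 of reconstruction error per sample, which is the trade-off the in-loop mode decision arbitrates block by block.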
DrugECs: An Ensemble System with Feature Subspaces for Accurate Drug-Target Interaction Prediction
Jiang, Jinjian; Wang, Nian; Zhang, Jun
2017-01-01
Background Drug-target interaction is key in drug discovery, especially in the design of new lead compounds. However, finding a new lead compound for a specific target is complicated and error-prone. Therefore computational techniques are commonly adopted in drug design, which can save time and costs to a significant extent. Results To address the issue, a new prediction system is proposed in this work to identify drug-target interaction. First, drug-target pairs are encoded with a fragment technique and the software “PaDEL-Descriptor.” The fragment technique is for encoding target proteins: it divides each protein sequence into several fragments in order and encodes each fragment with several physicochemical properties of amino acids. The software “PaDEL-Descriptor” creates encoding vectors for drug molecules. Second, the dataset of drug-target pairs is resampled and several overlapped subsets are obtained, which are then input into a kNN (k-Nearest Neighbor) classifier to build an ensemble system. Conclusion Experimental results on the drug-target dataset showed that our method performs better and runs faster than the state-of-the-art predictors. PMID:28744468
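The fragment technique for target proteins can be sketched concretely: split the sequence into ordered contiguous fragments and summarize each by averaged amino-acid properties. The property table below is an illustrative subset (Kyte-Doolittle-style hydropathy plus a formal-charge flag), not the descriptors actually used in the paper:

```python
# Illustrative per-residue properties (hydropathy, charge); a real encoder
# would cover all 20 amino acids with published property scales.
PROPS = {
    "A": (1.8, 0.0), "R": (-4.5, 1.0), "N": (-3.5, 0.0), "D": (-3.5, -1.0),
    "C": (2.5, 0.0), "E": (-3.5, -1.0), "G": (-0.4, 0.0), "K": (-3.9, 1.0),
    "L": (3.8, 0.0), "S": (-0.8, 0.0),
}

def fragment_encode(seq, n_fragments):
    """Split seq (residues must appear in PROPS) into n_fragments contiguous
    pieces, in order, and represent each piece by the mean of its residues'
    property values. Returns a flat feature vector."""
    size = -(-len(seq) // n_fragments)  # ceiling division
    n_props = len(next(iter(PROPS.values())))
    features = []
    for i in range(0, len(seq), size):
        frag = seq[i:i + size]
        for p in range(n_props):
            features.append(sum(PROPS[a][p] for a in frag) / len(frag))
    return features
```

The resulting fixed-length vector can be concatenated with the drug's PaDEL-style descriptor vector to form one training instance for the kNN ensemble.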
Motion immune diffusion imaging using augmented MUSE (AMUSE) for high-resolution multi-shot EPI
Guhaniyogi, Shayan; Chu, Mei-Lan; Chang, Hing-Chiu; Song, Allen W.; Chen, Nan-kuei
2015-01-01
Purpose To develop new techniques for reducing the effects of microscopic and macroscopic patient motion in diffusion imaging acquired with high-resolution multi-shot EPI. Theory The previously reported Multiplexed Sensitivity Encoding (MUSE) algorithm is extended to account for macroscopic pixel misregistrations as well as motion-induced phase errors in a technique called Augmented MUSE (AMUSE). Furthermore, to obtain more accurate quantitative DTI measures in the presence of subject motion, we also account for the altered diffusion encoding among shots arising from macroscopic motion. Methods MUSE and AMUSE were evaluated on simulated and in vivo motion-corrupted multi-shot diffusion data. Evaluations were made both on the resulting imaging quality and estimated diffusion tensor metrics. Results AMUSE was found to reduce image blurring resulting from macroscopic subject motion compared to MUSE, but yielded inaccurate tensor estimations when neglecting the altered diffusion encoding. Including the altered diffusion encoding in AMUSE produced better estimations of diffusion tensors. Conclusion The use of AMUSE allows for improved image quality and diffusion tensor accuracy in the presence of macroscopic subject motion during multi-shot diffusion imaging. These techniques should facilitate future high-resolution diffusion imaging. PMID:25762216
NASA Technical Reports Server (NTRS)
Brooner, W. G.; Nichols, D. A.
1972-01-01
Development of a scheme for utilizing remote sensing technology in an operational program for regional land use planning and land resource management program applications. The scheme utilizes remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data, using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and the proposed scheme therefore has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
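The encoder's measurement step, replacing the low-pass pre-filter with a local random binary convolution kernel and then polyphase down-sampling, can be sketched directly. Kernel size, seed, and test image are illustrative, and the zero-padding boundary handling is our assumption:

```python
import random

def random_binary_kernel(size, seed):
    """A size x size kernel of 0/1 entries drawn from a seeded RNG, so that
    encoder and decoder can regenerate the same kernel from the seed."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]

def conv_then_downsample(image, kernel, factor=2):
    """Local random measurements: same-size convolution (zero padding at the
    borders) evaluated only on a polyphase grid, i.e. every `factor`-th
    pixel in each dimension."""
    h, w = len(image), len(image[0])
    k = len(kernel)
    off = k // 2
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            acc = 0.0
            for dy in range(k):
                for dx in range(k):
                    yy, xx = y + dy - off, x + dx - off
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[dy][dx] * image[yy][xx]
            row.append(acc)
        out.append(row)
    return out
```

The output is an ordinary quarter-size image of local random sums, which is exactly what lets a standard codec compress it while a compressive-sensing decoder later inverts the measurement operator.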
Phased array ghost elimination.
Kellman, Peter; McVeigh, Elliot R
2006-05-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. 
In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
Phased array ghost elimination
Kellman, Peter; McVeigh, Elliot R.
2007-01-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities.
In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. PMID:16705636
Potential digitization/compression techniques for Shuttle video
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B. H.
1978-01-01
The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.
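The DPCM encoder mentioned above transmits only the quantized difference between each sample and a prediction formed from previously decoded values. A minimal one-dimensional sketch (the paper's encoder also exploits temporal and spectral correlation across color planes; the previous-sample predictor and step size here are illustrative assumptions):

```python
def dpcm_encode(samples, step=4):
    """DPCM: quantize the difference between each sample and the decoder's prediction."""
    codes, pred = [], 0
    for s in samples:
        q = round((s - pred) / step)
        codes.append(q)
        pred += q * step  # track what the decoder will reconstruct, not the true sample
    return codes

def dpcm_decode(codes, step=4):
    """Rebuild samples by accumulating the dequantized differences."""
    out, pred = [], 0
    for q in codes:
        pred += q * step
        out.append(pred)
    return out
```

Because the encoder predicts from the *reconstructed* signal rather than the original, quantization error does not accumulate from sample to sample.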
NASA Astrophysics Data System (ADS)
Lin, Liangjie; Wei, Zhiliang; Yang, Jian; Lin, Yanqin; Chen, Zhong
2014-11-01
The spatial encoding technique can be used to accelerate the acquisition of multi-dimensional nuclear magnetic resonance spectra. However, with this technique, we have to make trade-offs between the spectral width and the resolution in the spatial encoding dimension (F1 dimension), resulting in the difficulty of covering large spectral widths while preserving acceptable resolutions for spatial encoding spectra. In this study, a selective shifting method is proposed to overcome the aforementioned drawback. This method is capable of narrowing spectral widths and improving spectral resolutions in spatial encoding dimensions by selectively shifting certain peaks in spectra of the ultrafast version of spin echo correlated spectroscopy (UFSECSY). This method can also serve as a powerful tool to obtain high-resolution correlated spectra in inhomogeneous magnetic fields for its resistance to any inhomogeneity in the F1 dimension inherited from UFSECSY. Theoretical derivations and experiments have been carried out to demonstrate performances of the proposed method. Results show that the spectral width in spatial encoding dimension can be reduced by shortening distances between cross peaks and axial peaks with the proposed method and the expected resolution improvement can be achieved. Finally, the shifting-absent spectrum can be recovered readily by post-processing.
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
Experimental scrambling and noise reduction applied to the optical encryption of QR codes.
Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto
2014-08-25
In this contribution, we implement two techniques to reinforce optical encryption, which we restrict in particular to QR codes, but which could be applied in a general encoding situation. To our knowledge, we present the first experimental positional optical scrambling merged with an optical encryption procedure. The inclusion of an experimental scrambling technique in an optical encryption protocol, in particular dealing with a QR code "container", adds more protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise over the recovered images besides increasing the security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.
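Positional scrambling can be pictured digitally as a keyed permutation of element positions, invertible only by a holder of the key. A toy sketch (the actual system performs the scrambling optically; the key-seeded PRNG permutation here is an illustrative assumption, not the authors' implementation):

```python
import random

def scramble(data, key):
    """Permute element positions with a key-seeded PRNG (toy positional scrambling)."""
    idx = list(range(len(data)))
    random.Random(key).shuffle(idx)
    return [data[i] for i in idx]

def descramble(scrambled, key):
    """Regenerate the same permutation from the key, then invert it."""
    idx = list(range(len(scrambled)))
    random.Random(key).shuffle(idx)
    out = [None] * len(scrambled)
    for pos, i in enumerate(idx):
        out[i] = scrambled[pos]
    return out
```

The scrambled sequence contains the same elements rearranged, so it adds a permutation layer on top of, rather than replacing, the encryption step.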
Self-assembled bionanostructures: proteins following the lead of DNA nanostructures
2014-01-01
Natural polymers are able to self-assemble into versatile nanostructures based on the information encoded into their primary structure. The structural richness of biopolymer-based nanostructures depends on the information content of building blocks and the available biological machinery to assemble and decode polymers with a defined sequence. Natural polypeptides comprise 20 amino acids with very different properties in comparison to only 4 structurally similar nucleotides, building elements of nucleic acids. Nevertheless the ease of synthesizing polynucleotides with selected sequence and the ability to encode the nanostructural assembly based on the two specific nucleotide pairs underlay the development of techniques to self-assemble almost any selected three-dimensional nanostructure from polynucleotides. Despite more complex design rules, peptides were successfully used to assemble symmetric nanostructures, such as fibrils and spheres. While earlier designed protein-based nanostructures used linked natural oligomerizing domains, recent design of new oligomerizing interaction surfaces and introduction of the platform for topologically designed protein fold may enable polypeptide-based design to follow the track of DNA nanostructures. The advantages of protein-based nanostructures, such as the functional versatility and cost effective and sustainable production methods provide strong incentive for further development in this direction. PMID:24491139
PhAST: pharmacophore alignment search tool.
Hähnke, Volker; Hofmann, Bettina; Grgat, Tomislav; Proschak, Ewgenij; Steinhilber, Dieter; Schneider, Gisbert
2009-04-15
We present a ligand-based virtual screening technique (PhAST) for rapid hit and lead structure searching in large compound databases. Molecules are represented as strings encoding the distribution of pharmacophoric features on the molecular graph. In contrast to other text-based methods using SMILES strings, we introduce a new form of text representation that describes the pharmacophore of molecules. This string representation opens the opportunity for revealing functional similarity between molecules by sequence alignment techniques in analogy to homology searching in protein or nucleic acid sequence databases. We favorably compared PhAST with other current ligand-based virtual screening methods in a retrospective analysis using the BEDROC metric. In a prospective application, PhAST identified two novel inhibitors of 5-lipoxygenase product formation with minimal experimental effort. This outcome demonstrates the applicability of PhAST to drug discovery projects and provides an innovative concept of sequence-based compound screening with substantial scaffold hopping potential. 2008 Wiley Periodicals, Inc.
Neuroimaging techniques for memory detection: scientific, ethical, and legal issues.
Meegan, Daniel V
2008-01-01
There is considerable interest in the use of neuroimaging techniques for forensic purposes. Memory detection techniques, including the well-publicized Brain Fingerprinting technique (Brain Fingerprinting Laboratories, Inc., Seattle WA), exploit the fact that the brain responds differently to sensory stimuli to which it has been exposed before. When a stimulus is specifically associated with a crime, the resulting brain activity should differentiate between someone who was present at the crime and someone who was not. This article reviews the scientific literature on three such techniques: priming, old/new, and P300 effects. The forensic potential of these techniques is evaluated based on four criteria: specificity, automaticity, encoding flexibility, and longevity. This article concludes that none of the techniques are devoid of forensic potential, although much research is yet to be done. Ethical issues, including rights to privacy and against self-incrimination, are discussed. A discussion of legal issues concludes that current memory detection techniques do not yet meet United States standards of legal admissibility.
NASA Astrophysics Data System (ADS)
Yusuf, Y.; Hidayati, W.
2018-01-01
Recombinant bacteria were identified using PCR and restriction analysis, followed by sequencing. This research aimed to obtain a yeast cell of Pichia pastoris carrying the gene encoding the stem bromelain enzyme. Recombinant production in yeast cells of P. pastoris can yield pure stem bromelain enzyme with the same conformation as the enzyme in pineapple plants. This recombinant stem bromelain enzyme can be used as a therapeutic protein in inflammatory, cancer, and degenerative diseases. This study was an early stage in a series of steps to obtain stem bromelain protein derived from pineapple by genetic engineering techniques. The research started by isolating RNA from pineapple stem, constructing cDNA using the reverse transcriptase-PCR (RT-PCR) technique, and amplifying the gene encoding the stem bromelain enzyme by PCR using a specifically designed primer pair. The process continued by cloning into bacterial cells of Escherichia coli. A vector carrying the gene encoding the stem bromelain enzyme was inserted into yeast cells of P. pastoris, followed by screening for P. pastoris cells carrying the gene. The stem bromelain gene has not yet been detected in the yeast cells of P. pastoris; the next step is to repeat the process with fresh reagents (RNase inhibitor) and liquid nitrogen.
Variable word length encoder reduces TV bandwidth requirements
NASA Technical Reports Server (NTRS)
Sivertson, W. E., Jr.
1965-01-01
Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.
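The abstract does not detail the codeword assignment, but the core idea of variable word length coding can be illustrated with the classic Elias gamma code, a different scheme that likewise gives short codewords to small (frequent) values and longer ones to rare values:

```python
def elias_gamma_encode(n):
    """Elias gamma: a value n >= 1 is sent as (bitlength-1) zeros, then n in binary."""
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only")
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits):
    """Decode a concatenation of gamma codewords back into integers."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":  # leading zeros announce the codeword length
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out
```

Because each codeword announces its own length, the decoder needs no word boundaries in the stream, which is what makes variable-length words usable on a serial channel.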
Comparison of Spatiotemporal Mapping Techniques for Enormous ETL and Exploitation Patterns
NASA Astrophysics Data System (ADS)
Deiotte, R.; La Valley, R.
2017-10-01
The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space-filling curve, a relative of Peano's 1890 space-filling curve, for spatial hashing, interleaving temporal hashes to generate a spatiotemporal encoding. However, other space-filling curves exist that provide different manifold coverings; these could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index, but other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several such methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
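The Z-order hashing described above can be sketched in a few lines: quantize each coordinate to an integer grid, then interleave the bits of the two spatial hashes. The bit widths, coordinate ranges, and the spatial-then-temporal key layout below are illustrative assumptions:

```python
def interleave_bits(x, y, bits=16):
    """Interleave the bits of two integers to form a Z-order (Morton) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x occupies even bit positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y occupies odd bit positions
    return code

def quantize(value, lo, hi, bits=16):
    """Map a coordinate in [lo, hi) to an integer grid cell of 2**bits cells."""
    cell = int((value - lo) / (hi - lo) * (1 << bits))
    return min(max(cell, 0), (1 << bits) - 1)

def spatiotemporal_key(lat, lon, t, t_lo, t_hi, bits=16):
    """Z-order spatial hash concatenated with a temporal hash (hypothetical layout)."""
    spatial = interleave_bits(quantize(lon, -180.0, 180.0, bits),
                              quantize(lat, -90.0, 90.0, bits), bits)
    temporal = quantize(t, t_lo, t_hi, bits)
    return (spatial << bits) | temporal
```

Nearby points share long key prefixes, which is what makes a sorted index on such keys efficient for spatial range queries.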
Spatial Specificity in Spatiotemporal Encoding and Fourier Imaging
Goerke, Ute
2015-01-01
Purpose Ultrafast imaging techniques based on spatiotemporal encoding (SPEN), such as RASER (rapid acquisition with sequential excitation and refocusing), are a promising new class of sequences since they are largely insensitive to the magnetic field variations which cause signal loss and geometric distortion in EPI. So far, attempts to theoretically describe the point-spread-function (PSF) for the original SPEN-imaging techniques have yielded limited success. To fill this gap a novel definition for an apparent PSF is proposed. Theory Spatial resolution in SPEN-imaging is determined by the spatial phase dispersion imprinted on the acquired signal by a frequency-swept excitation or refocusing pulse. The resulting signal attenuation increases with larger distance from the vertex of the quadratic phase profile. Methods Bloch simulations and experiments were performed to validate theoretical derivations. Results The apparent PSF quantifies the fractional contribution of magnetization to a voxel's signal as a function of distance to the voxel. In contrast, the conventional PSF represents the signal intensity at various locations. Conclusion The definition of the conventional PSF fails for SPEN-imaging since only the phase of isochromats, but not the amplitude of the signal, varies. The concept of the apparent PSF is shown to be generalizable to conventional Fourier-imaging techniques. PMID:26712657
Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan
2015-01-01
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as the video capture/compression, the acquisition/compression of each physiological signal, and the video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.
Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick
2017-01-01
Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, results of these studies are selective in the evaluated steps of a classification pipeline, in the utilized datasets for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters towards classification accuracy. The investigated methods span from detection, extraction, fusion, pooling, to encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that their wisely chosen orchestration allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. In result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999
Supporting Handoff in Asynchronous Collaborative Sensemaking Using Knowledge-Transfer Graphs.
Zhao, Jian; Glueck, Michael; Isenberg, Petra; Chevalier, Fanny; Khan, Azam
2018-01-01
During asynchronous collaborative analysis, handoff of partial findings is challenging because externalizations produced by analysts may not adequately communicate their investigative process. To address this challenge, we developed techniques to automatically capture and help encode tacit aspects of the investigative process based on an analyst's interactions, and streamline explicit authoring of handoff annotations. We designed our techniques to mediate awareness of analysis coverage, support explicit communication of progress and uncertainty with annotation, and implicit communication through playback of investigation histories. To evaluate our techniques, we developed an interactive visual analysis system, KTGraph, that supports an asynchronous investigative document analysis task. We conducted a two-phase user study to characterize a set of handoff strategies and to compare investigative performance with and without our techniques. The results suggest that our techniques promote the use of more effective handoff strategies, help increase an awareness of prior investigative process and insights, as well as improve final investigative outcomes.
Method and apparatus for optical encoding with compressible imaging
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2006-01-01
The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.
USDA-ARS?s Scientific Manuscript database
Introduction: Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS)is increasingly utilized as a rapid technique to identify microorganisms including pathogenic bacteria. However, little attention has been paid to the significant proteomic information encoded in ...
Chang, Zheng; Xiang, Qing-San; Shen, Hao; Yin, Fang-Fang
2010-03-01
To accelerate non-contrast-enhanced MR angiography (MRA) with inflow inversion recovery (IFIR) with a fast imaging method, Skipped Phase Encoding and Edge Deghosting (SPEED). IFIR imaging uses a preparatory inversion pulse to reduce signals from static tissue, while leaving inflow arterial blood unaffected, resulting in sparse arterial vasculature on modest tissue background. By taking advantage of vascular sparsity, SPEED can be simplified with a single-layer model to achieve higher efficiency in both scan time reduction and image reconstruction. SPEED can also make use of information available in multiple coils for further acceleration. The techniques are demonstrated with a three-dimensional renal non-contrast-enhanced IFIR MRA study. Images are reconstructed by SPEED based on a single-layer model to achieve an undersampling factor of up to 2.5 using one skipped phase encoding direction. By making use of information available in multiple coils, SPEED can achieve an undersampling factor of up to 8.3 with four receiver coils. The reconstructed images generally have comparable quality as that of the reference images reconstructed from full k-space data. As demonstrated with a three-dimensional renal IFIR scan, SPEED based on a single-layer model is able to reduce scan time further and achieve higher computational efficiency than the original SPEED.
Perceptual distortion analysis of color image VQ-based coding
NASA Astrophysics Data System (ADS)
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
1997-04-01
It is generally accepted that a RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer, to precisely control color. We then obtained psychophysical judgements to measure how well these methods work to minimize perceptual distortion in a variety of color space.
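Treating each pixel as a three-component vector lets a vector quantizer exploit the inter-plane correlation that independent per-plane gray-scale coding ignores. A minimal Lloyd/k-means sketch (deterministic initialization and squared Euclidean distortion are simplifying assumptions; the paper's point is that perceptual coding should measure distortion in a suitable color space instead):

```python
def nearest(p, codebook):
    """Index of the codeword closest to pixel p in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(p, codebook[i])))

def vq_codebook(pixels, k, iters=20):
    """Train a k-entry codebook on RGB triples with Lloyd's algorithm."""
    codebook = [tuple(map(float, p)) for p in pixels[:k]]  # deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[nearest(p, codebook)].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old codeword if a cluster empties
                codebook[i] = tuple(sum(ch) / len(c) for ch in zip(*c))
    return codebook

def vq_encode(pixels, codebook):
    """Each pixel is replaced by the index of its nearest codeword."""
    return [nearest(p, codebook) for p in pixels]
```

Swapping the distance in `nearest` for one computed in a perceptually uniform space is the kind of change the study evaluates psychophysically.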
Eliminating ambiguity in digital signals
NASA Technical Reports Server (NTRS)
Weber, W. J., III
1979-01-01
In a multiamplitude minimum shift keying (MAMSK) transmission system, a method of differential encoding overcomes the ambiguity problem associated with advanced digital-transmission techniques with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if signal points are properly encoded and decoded, bits are detected correctly regardless of phase ambiguities.
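The differential-encoding principle is easy to see in code: transmit accumulated changes rather than absolute symbols, so any constant phase offset at the receiver cancels out of consecutive differences. A minimal modulo-m sketch (a single reference symbol is prepended; the MAMSK signal-space details are omitted):

```python
def diff_encode(symbols, m):
    """Transmit a reference symbol followed by running sums of the data, modulo m."""
    tx = [0]  # reference symbol
    for s in symbols:
        tx.append((tx[-1] + s) % m)
    return tx

def diff_decode(rx, m):
    """Recover data from consecutive differences; a constant offset cancels."""
    return [(b - a) % m for a, b in zip(rx, rx[1:])]
```

Rotating every received symbol by the same unknown offset leaves all the differences, and hence the decoded bits, unchanged.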
Memory skills mediating superior memory in a world-class memorist.
Ericsson, K Anders; Cheng, Xiaojun; Pan, Yafeng; Ku, Yixuan; Ge, Yi; Hu, Yi
2017-10-01
Laboratory studies have investigated how individuals with normal memory spans attained digit spans over 80 digits after hundreds of hours of practice. Experimental analyses of their memory skills suggested that their attained memory spans were constrained by encoding time, since the time needed increases with the length of the digit sequence to be memorised. These constraints seemed to be violated by a world-class memorist, Feng Wang (FW), who won the World Memory Championship by recalling 300 digits presented at 1 digit/s. In several studies we examined FW's memory skills underlying his exceptional performance. First FW reproduced his superior memory span of 200 digits under laboratory conditions, and we obtained his retrospective reports describing his encoding/retrieval processes (Experiment 1). Further experiments used self-paced memorisation to identify temporal characteristics of encoding of digits in 4-digit clusters (Experiment 2), and explored memory encoding at presentation speeds much faster than 1 digit/s (Experiment 3). FW's superiority over previous digit span experts is explained by his acquisition of well-known mnemonic techniques and his training that focused on rapid memorisation. His memory performance supports the feasibility of acquiring memory skills for improved working memory based on storage in long-term memory.
Qubits, qutrits, and ququads stored in single photons from an atom-cavity system
NASA Astrophysics Data System (ADS)
Holleczek, Annemarie; Barter, Oliver; Langfahl-Klabes, Gunnar; Kuhn, Axel
2015-03-01
One of today's challenges in realizing computing based on quantum mechanics is to reliably and scalably encode information in quantum systems. Here, we present a photon source to deliver, on demand, photonic quantum bits of information based on a strongly coupled atom-cavity system. It operates intermittently for periods of up to 100 μs, with a single-photon repetition rate of 1 MHz, and an intra-cavity production efficiency of up to 85%. Due to the photons' inherent coherence time of 500 ns and our ability to arbitrarily shape their amplitude and phase profile, we time-bin encode information within one photon. To do so, the spatio-temporal envelope of a single photon is sub-divided into d time bins, which allows for the delivery of arbitrary qu-d-its. The latter is done with a fidelity of > 95% for qubits, and 94% for qutrits, verified using a newly developed time-resolved quantum-homodyne technique.
Sankar, Punnaivanam; Aghila, Gnanasekaran
2007-01-01
The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
NASA Astrophysics Data System (ADS)
Malik, Mehul
Over the past three decades, quantum mechanics has allowed the development of technologies that provide unconditionally secure communication. In parallel, the quantum nature of the transverse electromagnetic field has spawned the field of quantum imaging that encompasses technologies such as quantum lithography, quantum ghost imaging, and high-dimensional quantum key distribution (QKD). The emergence of such quantum technologies also highlights the need for the development of accurate and efficient methods of measuring and characterizing the elusive quantum state itself. In this thesis, I present new technologies that use the quantum properties of light for security. The first of these is a technique that extends the principles behind QKD to the field of imaging and optical ranging. By applying the polarization-based BB84 protocol to individual photons in an active imaging system, we obtained images that were secure against any intercept-resend jamming attacks. The second technology presented in this thesis is based on an extension of quantum ghost imaging, a technique that uses position-momentum entangled photons to create an image of an object without directly gaining any spatial information from it. We used a holographic filtering technique to build a quantum ghost image identification system that uses a few pairs of photons to identify an object from a set of known objects. The third technology addressed in this thesis is a high-dimensional QKD system that uses orbital-angular-momentum (OAM) modes of light for encoding. Moving to a high-dimensional state space in QKD allows one to impress more information on each photon, as well as introduce higher levels of security. I discuss the development of two OAM-QKD protocols based on the BB84 and Ekert protocols of QKD. In addition, I present a study characterizing the effects of turbulence on a communication system using OAM modes for encoding. 
The fourth and final technology presented in this thesis is a relatively new technique called direct measurement that uses sequential weak and strong measurements to characterize a quantum state. I use this technique to characterize the quantum state of a photon with a dimensionality of d = 27, and visualize its rotation in the natural basis of OAM.
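The polarization-based BB84 sifting step that the thesis builds on can be sketched in a few lines (an idealized, noise- and eavesdropper-free simulation, not the active imaging system itself): Alice and Bob keep only the positions where their randomly chosen bases agree, and in this ideal case the sifted keys match exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

# Without an eavesdropper, Bob reads the bit correctly iff the bases match;
# a mismatched basis gives a random outcome
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: keep only the basis-matched positions (about half of them)
key_alice = alice_bits[match]
key_bob = bob_bits[match]
```

An intercept-resend attacker, measuring in a random basis and retransmitting, would introduce detectable errors into the sifted key; that disturbance is what the secure-imaging scheme exploits.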
Demb, J B; Desmond, J E; Wagner, A D; Vaidya, C J; Glover, G H; Gabrieli, J D
1995-09-01
Prefrontal cortical function was examined during semantic encoding and repetition priming using functional magnetic resonance imaging (fMRI), a noninvasive technique for localizing regional changes in blood oxygenation, a correlate of neural activity. Words studied in a semantic (deep) encoding condition were better remembered than words studied in both easier and more difficult nonsemantic (shallow) encoding conditions, with difficulty indexed by response time. The left inferior prefrontal cortex (LIPC) (Brodmann's areas 45, 46, 47) showed increased activation during semantic encoding relative to nonsemantic encoding regardless of the relative difficulty of the nonsemantic encoding task. Therefore, LIPC activation appears to be related to semantic encoding and not task difficulty. Semantic encoding decisions are performed faster the second time words are presented. This represents semantic repetition priming, a facilitation in semantic processing for previously encoded words that is not dependent on intentional recollection. The same LIPC area activated during semantic encoding showed decreased activation during repeated semantic encoding relative to initial semantic encoding of the same words. This decrease in activation during repeated encoding was process specific; it occurred when words were semantically reprocessed but not when words were nonsemantically reprocessed. The results were apparent in both individual and averaged functional maps. These findings suggest that the LIPC is part of a semantic executive system that contributes to the on-line retrieval of semantic information.
Flag-based detection of weak gas signatures in long-wave infrared hyperspectral image sequences
NASA Astrophysics Data System (ADS)
Marrinan, Timothy; Beveridge, J. Ross; Draper, Bruce; Kirby, Michael; Peterson, Chris
2016-05-01
We present a flag manifold based method for detecting chemical plumes in long-wave infrared hyperspectral movies. The method encodes temporal and spatial information related to a hyperspectral pixel into a flag, or nested sequence of linear subspaces. The technique used to create the flags pushes information about the background clutter, ambient conditions, and potential chemical agents into the leading elements of the flags. Exploiting this temporal information allows for a detection algorithm that is sensitive to the presence of weak signals. This method is compared to existing techniques qualitatively on real data and quantitatively on synthetic data, showing that the flag-based algorithm consistently performs better when the signal-to-interference-plus-noise ratio (SINR) is low, and beats the adaptive coherence estimator (ACE) and matched filter (MF) algorithms in probability of detection at low false-alarm probabilities even when the SINR is high.
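For context, the matched filter (MF) baseline mentioned above can be sketched on synthetic data (a toy Gaussian clutter model with a made-up signature; this is the classical baseline, not the flag-based detector): the estimated clutter statistics whiten the data, and projection onto the whitened signature separates plume pixels from background.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 20, 5000
# Correlated Gaussian background clutter
C0 = 0.9 * np.eye(d) + 0.1 * np.ones((d, d))
L = np.linalg.cholesky(C0)
bg = rng.normal(size=(n, d)) @ L.T
s = rng.normal(size=d)                    # made-up gas signature

x_bg = bg[:100]                           # background-only test pixels
x_tgt = bg[100:200] + 0.5 * s             # weak plume added to clutter

mu = bg.mean(axis=0)
C = np.cov(bg, rowvar=False)
Ci = np.linalg.inv(C)
w = Ci @ s / np.sqrt(s @ Ci @ s)          # normalized matched-filter weights

score_bg = (x_bg - mu) @ w
score_tgt = (x_tgt - mu) @ w              # plume pixels score higher
```

The flag-based method differs in that it summarizes a pixel's temporal history as a nested sequence of subspaces rather than relying on a single signature projection.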
A High Performance Image Data Compression Technique for Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack
2003-01-01
A high-performance image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes produced by hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development; the implementation is being designed to compress data in excess of 20 Msamples/sec and to support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.
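The embedded property of bitplane encoding — truncate the bit string anywhere and still obtain a best-effort reconstruction at that rate — can be sketched as follows (toy 8-bit coefficients standing in for the 2D transform output):

```python
import numpy as np

rng = np.random.default_rng(3)
coeffs = rng.integers(0, 256, size=16).astype(np.uint8)  # stand-in transform coefficients

# Encode bitplanes from most to least significant
planes = [((coeffs >> b) & 1) for b in range(7, -1, -1)]

def decode(planes_sent):
    # Rebuild from however many leading planes were received (embedded property)
    out = np.zeros(16, dtype=np.uint8)
    for i, p in enumerate(planes_sent):
        out |= (p.astype(np.uint8) << (7 - i))
    return out

exact = decode(planes)        # all 8 planes -> lossless reconstruction
coarse = decode(planes[:4])   # top 4 planes -> values quantized to 16 levels
```

Because significance decreases plane by plane, cutting the stream at any point yields a uniformly quantized approximation, which is what lets the user dial in an exact compression rate.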
Blind phase error suppression for color-encoded digital fringe projection profilometry
NASA Astrophysics Data System (ADS)
Ma, S.; Zhu, R.; Quan, C.; Li, B.; Tay, C. J.; Chen, L.
2012-04-01
Color-encoded digital fringe projection profilometry (CDFPP) offers high speed, non-contact operation and full-field measurement, making it one of the most important dynamic three-dimensional (3D) profile measurement techniques. However, due to factors such as color cross-talk and gamma distortion of electro-optical devices, phase errors arise when conventional phase-shifting algorithms with fixed phase-shift values are used to retrieve phases. In this paper, a simple and effective blind phase-error suppression approach based on isotropic n-dimensional fringe pattern normalization (INFPN) and carrier squeezing interferometry (CSI) is proposed. It requires no pre-calibration of the gamma, the color-coupling coefficients or the phase-shift values. Simulation and experimental results show that the proposed approach effectively suppresses phase errors and achieves accurate measurements in CDFPP.
Entropy and Certainty in Lossless Data Compression
ERIC Educational Resources Information Center
Jacobs, James Jay
2009-01-01
Data compression is the art of using encoding techniques to represent data symbols using less storage space compared to the original data representation. The encoding process builds a relationship between the entropy of the data and the certainty of the system. The theoretical limits of this relationship are defined by the theory of entropy in…
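The entropy bound the abstract alludes to can be computed directly: Shannon entropy gives the minimum average number of bits per symbol any lossless code can achieve (a minimal illustration, not taken from the thesis itself).

```python
import math
from collections import Counter

def shannon_entropy(data):
    # H = -sum p(x) * log2 p(x), in bits per symbol
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_uniform = shannon_entropy("abcd")        # 4 equally likely symbols -> exactly 2 bits
h_skewed = shannon_entropy("abracadabra")  # 5 symbols, skewed -> under log2(5) = 2.32 bits
```

The more certain the system is about the next symbol (the more skewed the distribution), the lower the entropy, and the more a lossless coder can compress.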
Fast and memory efficient text image compression with JBIG2.
Ye, Yan; Cosman, Pamela
2003-01-01
In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping, where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory, but with diminishing returns. We also propose an adaptive way to update the dictionary only when it has become out of date. The adaptive updating scheme resolves the time-versus-bit-rate and memory-versus-bit-rate tradeoffs well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. Combined, these speedup techniques can save up to 75% of the total encoding time with at most a 1.7% bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature-monitored shape unifying to significantly reduce substitution errors in the reconstructed images.
Lu, Emily; Elizondo-Riojas, Miguel-Angel; Chang, Jeffrey T; Volk, David E
2014-06-10
Next-generation sequencing results from bead-based aptamer libraries have demonstrated that traditional DNA/RNA alignment software is insufficient. This is particularly true for X-aptamers containing specialty bases (W, X, Y, Z, ...) that are identified by special encoding. Thus, we sought an automated program that uses the inherent design scheme of bead-based X-aptamers to create a hypothetical reference library and Markov modeling techniques to provide improved alignments. Aptaligner provides this feature as well as length error and noise level cutoff features, is parallelized to run on multiple central processing units (cores), and sorts sequences from a single chip into projects and subprojects.
Neural Network Grasping Controller for Continuum Robots
2006-01-01
string encoders attached to the base of section 1 and optical encoders located at the end plates of section 1 and 2. The cables from each of the...string encoders run the entire length of the arm through the optical encoders at the lower sections, as seen in Figure 1. This configuration enables the...encoders at the base section and the optical encoders at the end plates of the distal sections, there were a number of protrusions on the surface of the arm
Memory for self-generated narration in the elderly.
Drevenstedt, J; Bellezza, F S
1993-06-01
The story mnemonic technique, an effective encoding and retrieval strategy for young adults, was used as a procedure to study encoding and recall in elderly women. Experiment 1 (15 undergraduate and 14 elderly women) showed the technique to be reliable over 3 weeks and without practice effects in both age groups. In Experiment 2, 67 elderly women (mean age = 72 years) were found to make up 3 distinctive subgroupings in patterns of narration cohesiveness and recall accuracy, consistent with pilot data on the technique. A stepwise multiple regression equation found narration cohesiveness, an adaptation of the Daneman-Carpenter (1980) working-memory measure and vocabulary to predict word recall. Results suggested that a general memory factor differentiated the 3 elderly subgroups.
Discrete decoding based ultrafast multidimensional nuclear magnetic resonance spectroscopy
NASA Astrophysics Data System (ADS)
Wei, Zhiliang; Lin, Liangjie; Ye, Qimiao; Li, Jing; Cai, Shuhui; Chen, Zhong
2015-07-01
Three-dimensional (3D) nuclear magnetic resonance (NMR) spectroscopy constitutes an important and powerful tool for analyzing chemical and biological systems. However, the abundant 3D information comes at the expense of long acquisition times lasting hours or even days. There has therefore been continuous interest in developing techniques to accelerate the recording of 3D NMR spectra, among which the ultrafast spatiotemporal encoding technique offers impressive acquisition speed by compressing a multidimensional spectrum into a single scan. It tends, however, to suffer from tradeoffs among the spectral widths in different dimensions, a problem that worsens as the number of dimensions increases. In this study, discrete decoding is proposed to free the ultrafast technique from these tradeoffs by focusing the decoding on signal-bearing sites. To verify its feasibility and effectiveness, we used the method to generate two different types of 3D spectra. The proposed method is also applicable to cases with more than three dimensions, which, based on the experimental results, may widen the applications of the ultrafast technique.
NASA Astrophysics Data System (ADS)
Waghorn, Ben J.; Shah, Amish P.; Ngwa, Wilfred; Meeks, Sanford L.; Moore, Joseph A.; Siebers, Jeffrey V.; Langen, Katja M.
2010-07-01
Intra-fraction organ motion during intensity-modulated radiation therapy (IMRT) treatment can cause differences between the planned and the delivered dose distribution. To investigate the extent of these dosimetric changes, a computational model was developed and validated. The computational method allows for calculation of the rigid motion perturbed three-dimensional dose distribution in the CT volume and therefore a dose volume histogram-based assessment of the dosimetric impact of intra-fraction motion on a rigidly moving body. The method was developed and validated for both step-and-shoot IMRT and solid compensator IMRT treatment plans. For each segment (or beam), fluence maps were exported from the treatment planning system. Fluence maps were shifted according to the target position deduced from a motion track. These shifted, motion-encoded fluence maps were then re-imported into the treatment planning system and were used to calculate the motion-encoded dose distribution. To validate the accuracy of the motion-encoded dose distribution the treatment plan was delivered to a moving cylindrical phantom using a programmed four-dimensional motion phantom. Extended dose response (EDR-2) film was used to measure a planar dose distribution for comparison with the calculated motion-encoded distribution using a gamma index analysis (3% dose difference, 3 mm distance-to-agreement). A series of motion tracks incorporating both inter-beam step-function shifts and continuous sinusoidal motion were tested. The method was shown to accurately predict the film's dose distribution for all of the tested motion tracks, both for the step-and-shoot IMRT and compensator plans. The average gamma analysis pass rate for the measured dose distribution with respect to the calculated motion-encoded distribution was 98.3 ± 0.7%. For static delivery the average film-to-calculation pass rate was 98.7 ± 0.2%. 
In summary, a computational technique has been developed to calculate the dosimetric effect of intra-fraction motion. This technique has the potential to evaluate a given plan's sensitivity to anticipated organ motion. With knowledge of the organ's motion it can also be used as a tool to assess the impact of measured intra-fraction motion after dose delivery.
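The core bookkeeping of the motion-encoding step — shifting each segment's fluence map by the deduced target position before recomputing dose — can be sketched with a toy map (hypothetical shifts; the actual method re-imports the shifted maps into the treatment planning system for a full dose calculation):

```python
import numpy as np

fluence = np.zeros((8, 8))
fluence[3:5, 3:5] = 1.0                 # toy fluence map for one segment

# Target displacement (pixels) deduced from the motion track, one per segment
shifts = [(0, 0), (1, 0), (0, -1)]

# Motion-encoded composite: each segment's map is shifted before summing,
# mimicking the re-imported shifted fluence maps
composite = sum(np.roll(fluence, s, axis=(0, 1)) for s in shifts)

# Static reference: the same segments delivered without motion
static = fluence * len(shifts)
```

Total fluence is conserved, but its spatial distribution is blurred along the motion track, which is exactly the perturbation the dose-volume-histogram analysis then quantifies.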
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of: converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene
2010-01-01
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
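The signal chain described — oversampling, averaging decimation, finite differences and FIR filtering — can be sketched for a single encoder channel (a toy constant-speed trace in software, not the FPGA implementation):

```python
import numpy as np

fs = 10_000                                   # oversampled rate (Hz)
t = np.arange(0, 1, 1 / fs)
angle = 2 * np.pi * 1.5 * t                   # shaft turning at 1.5 rev/s
counts = np.floor(angle / (2 * np.pi) * 1024) # 1024-line incremental encoder

# Averaging decimation: 100 oversamples -> one low-rate sample
dec = counts.reshape(-1, 100).mean(axis=1)

# Finite difference of decimated counts -> velocity in rev/s
fd = np.diff(dec) * (fs / 100) / 1024

# Short moving-average FIR to smooth residual quantization noise
fir = np.convolve(fd, np.ones(5) / 5, mode='valid')
```

Oversampling followed by averaging trades excess sample rate for effective resolution, which is why the velocity estimate converges to the true 1.5 rev/s despite the coarse count quantization.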
Creation of hybrid optoelectronic systems for document identification
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Voronyak, Taras I.; Kulynych, Yaroslav P.; Maksymenko, Olexander P.; Pogan, Ignat Y.
2001-06-01
Use of security devices based on a joint transform correlator (JTC) architecture for identification of credit cards and other products is very promising. Experimental demonstrations of the random phase encoding technique for security verification show that hybrid JTCs can be successfully utilized. The random phase encoding technique provides a very high level of protection for the products and items to be identified. However, realizing this technique requires overcoming certain practical problems. To solve some of these problems and simultaneously improve the security of documents and other products, we propose to use a transformed phase mask (TPM) as the input object in an optical correlator. This mask is synthesized from a random binary pattern (RBP), which is directly used to fabricate a reference phase mask (RPM). To obtain the TPM, we first divide the RBP into several parts (for example, K parts) of arbitrary shape and then fabricate the TPM from this transformed RBP. The fabricated TPM can be bonded as an optical mark to any product or item to be identified. If the RPM and the TPM are placed at the optical correlator input, the first diffraction order of the output correlation signal contains K narrow autocorrelation peaks. The distances between the peaks and their intensities can be treated as the terms of the identification feature vector (FV) for TPM identification.
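The verification principle — sharp correlation peaks only when the mark matches the reference pattern — can be sketched with an FFT-based correlation on bipolar random patterns (a digital stand-in for the optical correlator output; the K-part TPM splitting is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
rbp = rng.choice([-1.0, 1.0], size=(32, 32))        # random binary pattern (reference)
probe_good = rbp.copy()                             # genuine mark
probe_bad = rng.choice([-1.0, 1.0], size=(32, 32))  # counterfeit mark

def corr_peak(a, b):
    # Circular cross-correlation via FFT, as the correlator plane would show
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    return np.abs(c).max()

peak_good = corr_peak(rbp, probe_good)  # tall autocorrelation peak
peak_bad = corr_peak(rbp, probe_bad)    # only noise-level peaks
```

With the K-part TPM, each matching part would contribute its own narrow peak, so both peak positions and intensities become usable feature-vector entries.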
Soh, Nobuaki
2008-01-01
Site-specific chemical labeling utilizing small fluorescent molecules is a powerful and attractive technique for in vivo and in vitro analysis of cellular proteins, one that can circumvent some of the problems of genetic labeling with large fluorescent proteins. In particular, affinity labeling based on metal chelation, advantageous for its high selectivity, simplicity and small tag size, is promising, as is enzymatic covalent labeling, and a variety of novel methods have accordingly been studied in recent years. This review describes advances in the chemical labeling of proteins, highlighting the metal-chelation methodology. PMID:27879749
Accelerated Slice Encoding for Metal Artifact Correction
Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts
2010-01-01
Purpose To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. 10 subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445
Design of frequency-encoded data-based optical master-slave-JK flip-flop using polarization switch
NASA Astrophysics Data System (ADS)
Mandal, Sumana; Mandal, Dhoumendra; Mandal, Mrinal Kanti; Garai, Sisir Kumar
2017-06-01
Optical data processing and communication systems provide enormous potential bandwidth and very high processing speeds, and can meet the demands of the present generation. An optical computing system requires several data processing units that work in the optical domain, and memory elements are essential for storing information: an optical flip-flop can store one bit of optical information, and counters can be developed from flip-flop registers. Here, the authors propose an optical master-slave (MS)-JK flip-flop built from two-input and three-input optical NAND gates. The optical NAND gates are developed using semiconductor optical amplifiers (SOAs); the nonlinear polarization switching property of the SOA is exploited, so that it acts as a polarization switch in the proposed scheme. A frequency encoding technique is adopted for representing data: a specific frequency of the optical signal represents a binary data bit. This representation is advantageous because frequency is a fundamental property of a signal and remains unaltered during reflection, refraction, absorption, etc. throughout data propagation. The simulated results support the feasibility of the scheme.
NQR: From imaging to explosives and drugs detection
NASA Astrophysics Data System (ADS)
Osán, Tristán M.; Cerioni, Lucas M. C.; Forguez, José; Ollé, Juan M.; Pusiol, Daniel J.
2007-02-01
The main aim of this work is to present an overview of the capabilities of nuclear quadrupole resonance (NQR) spectroscopy for solid-state imaging and the detection of illegal substances such as explosives and drugs. We briefly discuss the evolution of different NQR imaging techniques, in particular those involving spatial encoding, which permit conservation of spectroscopic information. It has been shown that plastic explosives and other forbidden substances cannot be easily detected by means of conventional inspection techniques, such as those based on conventional X-ray technology. For this kind of application, the experimental results show that the information inferred from NQR spectroscopy provides excellent means of performing volumetric and surface detection of dangerous explosive and drug compounds.
Synchronization trigger control system for flow visualization
NASA Technical Reports Server (NTRS)
Chun, K. S.
1987-01-01
The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73 240 rpm. This was achieved by implementing a look-up table (LUT) addressed by the shaft encoder signal together with appropriate latches. The digital signal processing technique achieves high-speed trigger-angle detection within 25 nsec by direct parallel bit comparison of the shaft encoder digital code with a simulated angle reference code, instead of angle value comparison, which involves more complicated computation steps. To establish synchronization to an AC reference signal whose magnitude varies with the rotating speed, a dynamic peak follow-up synchronization technique was devised. This method scrutinizes the reference signal and provides the correct timing within 40 nsec. Two application examples are described.
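The direct code-comparison idea can be sketched in software (a hypothetical 10-bit encoder, matching the ~0.35 deg resolution quoted above; the real device performs this comparison in parallel hardware rather than sequentially):

```python
RESOLUTION = 1024                       # 10-bit shaft encoder: 360/1024 ~ 0.35 deg/step

def angle_to_code(deg):
    # Precompute the simulated angle reference code for a target angle
    return int(round(deg / 360.0 * RESOLUTION)) % RESOLUTION

TRIGGER_CODE = angle_to_code(123.5)     # hypothetical target crank angle

def trigger(encoder_code):
    # Direct parallel bit comparison: one equality test, no angle arithmetic
    return encoder_code == TRIGGER_CODE

# Sweep every possible encoder code; the trigger fires exactly once per revolution
fires = [code for code in range(RESOLUTION) if trigger(code)]
```

Because the comparison is a single bitwise equality rather than an angle-value computation, its latency is bounded by gate delays, which is how the 25 nsec detection figure is reached.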
Maslennikova, I L; Kuznetsova, M V; Toplak, N; Nekrasova, I V; Žgur Bertok, D; Starčič Erjavec, M
2018-05-07
The efficiency of the bacteriocin, colicin ColE7, bacterial conjugation-based "kill" - "anti-kill" antimicrobial system, was assessed using real-time PCR, flow cytometry and bioluminescence. The ColE7 antimicrobial system consists of the genetically modified Escherichia coli strain Nissle 1917 harbouring a conjugative plasmid (derivative of the F-plasmid) encoding the "kill" gene (ColE7 activity gene) and a chromosomally encoded "anti-kill" gene (ColE7 immunity gene). On the basis of traJ gene expression in the killer donor cells, our results showed that the efficiency of the here studied antimicrobial system against target E. coli was higher at 4 than at 24 h. Flow cytometry was used to indirectly estimate DNase activity of the antimicrobial system, as lysis of target E. coli cells in the conjugative mixture with the killer donor strain led to reduction in cell cytosol fluorescence. According to a lux assay, E. coli TG1 (pXen lux + Ap r ) with constitutive luminescence were killed already after 2 h of treatment. Target sensor E. coli C600 with DNA damage SOS-inducible luminescence showed significantly lower SOS induction 6 and 24 h following treatment with the killer donor strain. Our results thus showed that bioluminescent techniques are quick and suitable for estimation of the ColE7 bacterial conjugation-based antimicrobial system antibacterial activity. Bacterial antimicrobial resistance is worldwide rising and causing deaths of thousands of patients infected with multi-drug resistant bacterial strains. In addition, there is a lack of efficient alternative antimicrobial agents. The significance of our research is the use of a number of methods (real-time PCR, flow cytometry and bioluminescence-based technique) to assess the antibacterial activity of the bacteriocin, colicin ColE7, bacterial conjugation-based "kill" - "anti-kill" antimicrobial system. 
Bioluminescent techniques proved to be rapid and suitable for estimation of antibacterial activity of ColE7 bacterial conjugation-based antimicrobial system and possibly other related systems. © 2018 The Society for Applied Microbiology.
Single-Molecule Encoders for Tracking Motor Proteins on DNA
NASA Astrophysics Data System (ADS)
Lipman, Everett A.
2012-02-01
Devices such as inkjet printers and disk drives track position and velocity using optical encoders, which produce periodic signals precisely synchronized with linear or rotational motion. We have implemented this technique at the nanometer scale by labeling DNA with regularly spaced fluorescent dyes. The resulting molecular encoders can be used in several ways for high-resolution continuous tracking of individual motor proteins. These measurements do not require mechanical coupling to macroscopic instrumentation, are automatically calibrated by the underlying structure of DNA, and depend on signal periodicity rather than absolute level. I will describe the synthesis of single-molecule encoders, data from and modeling of experiments on a helicase and a DNA polymerase, and some ideas for future work.
Micromirror array nanostructures for anticounterfeiting applications
NASA Astrophysics Data System (ADS)
Lee, Robert A.
2004-06-01
The optical characteristics of pixellated passive micromirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for the protection of documents against counterfeiting. The traditional design variables of foil-based diffractive OVDs are shown to map to a corresponding set of design parameters for reflective optical micromirror array (OMMA) devices. The greatly increased depth characteristics of micromirror array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in line with the normal printing process. The micromirror array OVD architecture therefore eliminates the need for hot stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micromirror array devices via a palette-based data format and a combination of electron beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices with the interesting property of allowing both application- and customer-specific OVD image encoding and data encoding at the end-user stage of production is described. Because of the end-user nature of the image and data encoding process, these devices are particularly well suited to ID document applications, and for this reason we refer to this new OVD concept as biometric OVD technology.
Study of statistical coding for digital TV
NASA Technical Reports Server (NTRS)
Gardenhire, L. W.
1972-01-01
The results are presented of a detailed study to determine a pseudo-optimum statistical code to be installed in a digital TV demonstration test set. Studies of source encoding were undertaken using redundancy removal techniques in which the picture is reproduced within a preset tolerance. Preliminary studies showed statistical encoding to be an encouraging method of source encoding. A pseudo-optimum code was defined and its associated performance determined. The format was fixed at 525 lines per frame and 30 frames per second, per commercial standards.
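Statistical encoding of this kind assigns shorter codewords to more probable symbols; a standard Huffman construction (a generic textbook illustration, not the pseudo-optimum code the study defined) looks like this:

```python
import heapq

def huffman(freqs):
    # Build a prefix-free code: repeatedly merge the two rarest subtrees.
    # The running counter breaks ties so heap comparisons never reach the dicts.
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, count, merged))
        count += 1
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
codes = huffman(freqs)
avg_bits = sum(freqs[s] * len(codes[s]) for s in freqs) / sum(freqs.values())
```

For this skewed distribution the average codeword length comes out well under the 3 bits a fixed-length code for six symbols would need, which is the saving statistical encoding offers.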
Extended depth of field in an intrinsically wavefront-encoded biometric iris camera
NASA Astrophysics Data System (ADS)
Bergkoetter, Matthew D.; Bentley, Julie L.
2014-12-01
This work describes a design process that greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function that is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding; however, the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
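The decode step — deconvolving every capture with the single defocus-insensitive PSF — can be sketched with a Wiener filter (a toy separable PSF standing in for the real wavefront-encoded one):

```python
import numpy as np

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                  # toy iris-like scene

# Separable stand-in PSF (circularly wrapped), with a strictly nonzero spectrum
p1 = np.zeros(32)
p1[0], p1[1], p1[-1] = 0.6, 0.2, 0.2
psf = np.outer(p1, p1)

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Wiener deconvolution using the known PSF; no object-distance knowledge needed
k = 1e-6                                 # small noise-regularization constant
G = np.conj(H) / (np.abs(H) ** 2 + k)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

Because the optimized PSF is (near) invariant over the extended depth range, this one filter recovers sharp images at any object distance within that range, which is what makes the approach practical for iris capture.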
An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system
NASA Astrophysics Data System (ADS)
Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran
2017-04-01
Channel estimation and optimisation are the main challenging tasks in Multi Input Multi Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for antenna selection in a transmission area. The main advantage of this method is that it effectively reduces bandwidth loss during data transmission. Here, we consider channel estimation and optimisation for improving transmission speed and reducing unused bandwidth. Initially, the message is given to the input of the communication system. Then, symbol mapping converts the message into signals, which are encoded with a space-time encoding technique. The single signal is divided into multiple streams that feed the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel is selected based on the bandwidth range; this is a Gaussian-distribution-type fading channel. Demultiplexing, the inverse of multiplexing, is then applied to the obtained signal, splitting the combined signal arriving from the medium back into the original information signals. Furthermore, a long-term evolution technique is used to schedule time slots to channels during transmission, and a hidden Markov model is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal-to-interference-plus-noise ratio.
An evaluation of consensus techniques for diagnostic interpretation
NASA Astrophysics Data System (ADS)
Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g. when multiple experts label an image and their interpretations differ, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of the medical data: mean, median and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations for lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, ..., 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretation, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
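The three consensus rules compared in this abstract can be sketched in a few lines. This is an illustrative example with hypothetical ratings, not the paper's code; `round()` for the mean-based label is an assumption about how a fractional mean is mapped back to the 1-5 rating scale.

```python
# Illustrative sketch: three ways to collapse multiple radiologists'
# 1-5 malignancy ratings into a single training label.
from statistics import mean, median, mode

def consensus(ratings, method):
    """Reduce one nodule's list of expert ratings to a single label."""
    if method == "mean":
        return round(mean(ratings))   # round back to the 1-5 rating scale
    if method == "median":
        return median(ratings)
    if method == "mode":
        return mode(ratings)          # most frequent rating
    raise ValueError(method)

# Up to four LIDC-style readers per nodule (hypothetical ratings):
nodule_ratings = [3, 4, 4, 5]
labels = {m: consensus(nodule_ratings, m) for m in ("mean", "median", "mode")}
```

With disagreeing readers the three rules can yield different labels, which is exactly the variability the classifiers in the study are exposed to.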
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
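Steps (1) and (4) of the pipeline above can be sketched concretely. This is a minimal, assumption-laden illustration (1-D blocks, orthonormal DCT-II, no quantization): the paper's AC-coefficient minimization, look-up table, and arithmetic coder are not reproduced.

```python
# Sketch of steps (1) and (4): a blockwise DCT and delta coding of the
# DC components. Illustrative only -- not the authors' implementation.
import math

def dct(block):
    """Orthonormal 1-D DCT-II of a list of N samples."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(block))
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * s)
    return out

def delta_encode(dc_values):
    """Step (4): keep the first DC value, then successive differences."""
    return [dc_values[0]] + [b - a for a, b in zip(dc_values, dc_values[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

blocks = [[10.0] * 8, [12.0] * 8, [11.0] * 8]  # toy 1-D "image" blocks
dc = [dct(b)[0] for b in blocks]               # DC component of each block
```

Because neighboring blocks have similar brightness, the delta-coded DC list contains small values that compress well under the entropy coder of step (5).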
Exploring the influence of encoding format on subsequent memory.
Turney, Indira C; Dennis, Nancy A; Maillet, David; Rajah, M Natasha
2017-05-01
Distinctive encoding is greatly influenced by gist-based processes and has been shown to suffer when highly similar items are presented in close succession. Thus, elucidating how presentation format affects gist processing is essential in determining the factors that influence these encoding processes. The current study utilised multivariate partial least squares (PLS) analysis to identify encoding networks directly associated with retrieval performance in blocked and intermixed presentation conditions. Subsequent memory analysis for successfully encoded items indicated no significant differences in reaction time or retrieval performance across presentation formats. Despite no significant behavioural differences, behaviour PLS revealed differences in brain-behaviour correlations and mean condition activity in brain regions associated with gist-based vs. distinctive encoding. Specifically, the intermixed format encouraged more distinctive encoding, showing increased activation of regions associated with strategy use and visual processing (e.g., frontal and visual cortices, respectively). In contrast, the blocked format exhibited increased gist-based processing, accompanied by increased activity in the right inferior frontal gyrus. Together, the results suggest that the sequence in which information is presented during encoding affects the degree to which distinctive encoding is engaged. These findings extend our understanding of Fuzzy Trace Theory and the role of presentation format in encoding processes.
Known-plaintext attack on the double phase encoding and its implementation with parallel hardware
NASA Astrophysics Data System (ADS)
Wei, Hengzheng; Peng, Xiang; Liu, Haitao; Feng, Songlin; Gao, Bruce Z.
2008-03-01
A known-plaintext attack on the double random phase encryption scheme, implemented with parallel hardware, is presented. Double random phase encoding (DRPE) is one of the most representative optical cryptosystems, developed in the mid-1990s, and has spawned quite a few variants since then. Although the DRPE encryption system strongly resists brute-force attack, its inherent architecture leaves a hidden weakness due to its linearity. Recently the real security strength of this opto-cryptosystem has been doubted and analyzed from the cryptanalysis point of view. In this presentation, we demonstrate that optical cryptosystems based on the DRPE architecture are vulnerable to known-plaintext attack. With this attack, the two encryption keys in the DRPE can be recovered with the help of a phase retrieval technique. In our approach, we adopt the hybrid input-output (HIO) algorithm to recover the random phase key in the object domain and then infer the key in the frequency domain. A single plaintext-ciphertext pair is sufficient to mount the attack, and the attack does not require a specially chosen plaintext. The phase retrieval technique based on HIO is an iterative process built on Fourier transforms, so it maps well onto a digital signal processor (DSP) implementation. We make use of a high-performance DSP to accomplish the known-plaintext attack. Compared with the software implementation, the hardware implementation is much faster. The performance of this DSP-based cryptanalysis system is also evaluated.
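The Fourier-transform iteration at the heart of such attacks can be sketched in miniature. The paper uses the hybrid input-output variant; for brevity, this 1-D toy shows the simpler error-reduction iteration that HIO extends (alternately enforcing the measured Fourier magnitudes and an object-domain constraint). The signal, support, and iteration count are all hypothetical, and a naive O(N^2) DFT stands in for the FFT.

```python
# Minimal sketch of Fourier-magnitude phase retrieval by error reduction,
# the simpler relative of the HIO algorithm discussed above.
import cmath, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def fourier_error(g, measured_mag):
    """Squared mismatch between |DFT(g)| and the measured magnitudes."""
    return sum((abs(Gk) - m) ** 2 for Gk, m in zip(dft(g), measured_mag))

def error_reduction(measured_mag, support, iters, seed=0):
    """Recover an object from Fourier magnitudes plus a support constraint."""
    rng = random.Random(seed)
    g = [rng.random() * s for s in support]     # random start inside support
    for _ in range(iters):
        G = dft(g)
        # Fourier-domain step: impose measured magnitudes, keep phases.
        G = [m * cmath.exp(1j * cmath.phase(Gk))
             for Gk, m in zip(G, measured_mag)]
        gp = idft(G)
        # Object-domain step: real, non-negative, zero outside the support.
        g = [max(v.real, 0.0) * s for v, s in zip(gp, support)]
    return g

# Toy demo: a real non-negative object confined to the first half of the array.
true = [0.0] * 16
for i in range(6):
    true[i] = i + 1.0
support = [1.0 if i < 8 else 0.0 for i in range(16)]
mag = [abs(v) for v in dft(true)]
est = error_reduction(mag, support, iters=100)
```

Error reduction is guaranteed not to increase the Fourier-domain error from one iteration to the next; HIO trades that monotonicity for a feedback term that escapes stagnation.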
Cates, Joshua W; Bieniosek, Matthew F; Levin, Craig S
2017-01-01
Maintaining excellent timing resolution in silicon photomultiplier (SiPM)-based time-of-flight positron emission tomography (TOF-PET) systems requires a large number of high-speed, high-bandwidth electronic channels and components. To minimize the cost and complexity of a system's back-end architecture and data acquisition, many analog signals are often multiplexed to fewer channels using techniques that encode timing, energy, and position information. With progress in the development of SiPMs having lower dark noise, afterpulsing, and cross talk, along with higher photodetection efficiency, a coincidence timing resolution (CTR) well below 200 ps FWHM is now easily achievable in single-pixel, bench-top setups using 20-mm length, lutetium-based inorganic scintillators. However, multiplexing the output of many SiPMs to a single channel will significantly degrade CTR without appropriate signal processing. We test the performance of a PET detector readout concept that multiplexes 16 SiPMs to two channels. One channel provides timing information with fast comparators, and the second channel encodes both position and energy information in a time-over-threshold-based pulse sequence. This multiplexing readout concept was constructed with discrete components to process signals from a [Formula: see text] array of SensL MicroFC-30035 SiPMs coupled to [Formula: see text] Lu 1.8 Gd 0.2 SiO 5 (LGSO):Ce (0.025 mol. %) scintillators. This readout method yielded a calibrated, global energy resolution of 15.3% FWHM at 511 keV with a CTR of [Formula: see text] FWHM between the 16-pixel multiplexed detector array and a [Formula: see text] LGSO-SiPM reference detector. In summary, results indicate this multiplexing scheme is a scalable readout technique that provides excellent coincidence timing performance.
Ghost artifact cancellation using phased array processing.
Kellman, P; McVeigh, E R
2001-08-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
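The ghost-nulling constraint described above can be illustrated with a two-coil toy: choose combining weights with unit response to the true signal and zero response to the ghost location. The sensitivity values below are hypothetical, and the sketch omits the SNR-optimization part of the full constrained problem (which additionally weights by the noise covariance).

```python
# Toy sketch of the nulling constraint: with two coils whose sensitivities
# to the signal (s) and to the ghost (h) are known, pick weights w so that
# w.s = 1 (keep signal) and w.h = 0 (cancel ghost).

def solve2(a, b, c, d, e, f):
    """Solve the 2x2 complex system [[a, b], [c, d]] @ w = [e, f] (Cramer)."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

s = (1.0 + 0.0j, 0.6 - 0.3j)    # coil sensitivities at the signal location
h = (0.2 + 0.5j, 1.0 + 0.0j)    # coil sensitivities at the ghost location
w = solve2(s[0], s[1], h[0], h[1], 1.0, 0.0)

# Each coil image is signal*s_i + ghost*h_i; the weighted combination
# keeps the signal and nulls the ghost by construction.
signal, ghost = 3.0 + 1.0j, -2.0 + 0.5j
coil1 = signal * s[0] + ghost * h[0]
coil2 = signal * s[1] + ghost * h[1]
combined = w[0] * coil1 + w[1] * coil2
```

With more coils than constraints, the leftover degrees of freedom are what the article's constrained optimization spends on maximizing SNR.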
A Passive Wireless Multi-Sensor SAW Technology Device and System Perspectives
Malocha, Donald C.; Gallagher, Mark; Fisher, Brian; Humphries, James; Gallagher, Daniel; Kozlovski, Nikolai
2013-01-01
This paper will discuss a SAW passive, wireless multi-sensor system under development by our group for the past several years. The device focus is on orthogonal frequency coded (OFC) SAW sensors, which use both frequency diversity and pulse position reflectors to encode the device ID and will be briefly contrasted to other embodiments. A synchronous correlator transceiver is used for the hardware and post processing and correlation techniques of the received signal to extract the sensor information will be presented. Critical device and system parameters addressed include encoding, operational range, SAW device parameters, post-processing, and antenna-SAW device integration. A fully developed 915 MHz OFC SAW multi-sensor system is used to show experimental results. The system is based on a software radio approach that provides great flexibility for future enhancements and diverse sensor applications. Several different sensor types using the OFC SAW platform are shown. PMID:23666124
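The correlation-based post-processing mentioned above can be sketched at baseband: the receiver correlates the trace against a sensor's known code and reads off the delay of the peak. The 7-chip sequence and echo delay here are hypothetical stand-ins for the actual OFC reflector response, which is an RF waveform with both frequency and pulse-position structure.

```python
# Illustrative matched-filter sketch: locate a sensor's coded reflection in
# a received trace by sliding cross-correlation with its known chip code.

def xcorr_peak(received, code):
    """Return the lag that maximizes the sliding dot product."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(received) - len(code) + 1):
        v = sum(r * c for r, c in zip(received[lag:lag + len(code)], code))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

code = [1, -1, 1, 1, -1, -1, 1]      # known chip sequence (hypothetical ID)
trace = [0.0] * 20
for i, c in enumerate(code):         # echo arrives 6 samples into the trace
    trace[6 + i] += c
delay = xcorr_peak(trace, code)
```

In a sensor reading, the recovered delay (and the echo's phase and amplitude) carry the measurand; codes with low cross-correlation let several sensors share the same band.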
A genetically encoded fluorescent sensor of ERK activity.
Harvey, Christopher D; Ehrhardt, Anka G; Cellurale, Cristina; Zhong, Haining; Yasuda, Ryohei; Davis, Roger J; Svoboda, Karel
2008-12-09
The activity of the ERK has complex spatial and temporal dynamics that are important for the specificity of downstream effects. However, current biochemical techniques do not allow for the measurement of ERK signaling with fine spatiotemporal resolution. We developed a genetically encoded, FRET-based sensor of ERK activity (the extracellular signal-regulated kinase activity reporter, EKAR), optimized for signal-to-noise ratio and fluorescence lifetime imaging. EKAR selectively and reversibly reported ERK activation in HEK293 cells after epidermal growth factor stimulation. EKAR signals were correlated with ERK phosphorylation, required ERK activity, and did not report the activities of JNK or p38. EKAR reported ERK activation in the dendrites and nucleus of hippocampal pyramidal neurons in brain slices after theta-burst stimuli or trains of back-propagating action potentials. EKAR therefore permits the measurement of spatiotemporal ERK signaling dynamics in living cells, including in neuronal compartments in intact tissues.
Three-dimensional imaging of cultural heritage artifacts with holographic printers
NASA Astrophysics Data System (ADS)
Kang, Hoonjong; Stoykova, Elena; Berberova, Nataliya; Park, Jiyong; Nazarova, Dimana; Park, Joo Sup; Kim, Youngmin; Hong, Sunghee; Ivanov, Branimir; Malinowski, Nikola
2016-01-01
Holography is defined as a two-step process of capture and reconstruction of the light wavefront scattered from three-dimensional (3D) objects. Capture of the wavefront is possible because both amplitude and phase are encoded in the hologram as a result of interference between the light beam coming from the object and a mutually coherent reference beam. The three-dimensional imaging provided by holography motivates the development of digital holographic imaging methods based on computer generation of holograms, in the form of holographic displays or holographic printers. The holographic printing technique relies on combining digital 3D object representation and encoding of the holographic data with recording of analog white-light-viewable reflection holograms. The paper considers 3D content generation for a holographic stereogram printer and a wavefront printer as a means of analogue recording of specific artifacts, which are complicated objects with regard to the restrictions of conventional analog holography.
Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.
2017-01-01
Purpose: To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods: A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results: Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion: A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613
A new gradient shimming method based on undistorted field map of B0 inhomogeneity.
Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang
2016-04-01
Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that always exists in the GRE images can yield estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradients to eliminate the distortions caused by B0 inhomogeneity in the field map. A corresponding automatic post-processing procedure is then introduced to obtain the undistorted B0 field map, based on the invariance of the B0 inhomogeneity and the varying polarity of the encoding gradient. The experimental results on both simulated and real gradient shimming tests demonstrate the high performance of this new method.
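The dual-echo field-map relation underlying such methods is simple to state: the phase accrued between two echoes encodes the off-resonance frequency. The single-voxel demo below is an illustrative sketch with simulated values, not the paper's method (which additionally corrects the geometric distortion of the maps).

```python
# Field map from two gradient echoes:
#   delta_f = angle(S2 * conj(S1)) / (2*pi*(TE2 - TE1))   [Hz]
# Toy single-voxel demo with a simulated off-resonance.
import cmath, math

def field_map_hz(s1, s2, te1, te2):
    """Off-resonance frequency (Hz) from two complex echo samples."""
    return cmath.phase(s2 * s1.conjugate()) / (2 * math.pi * (te2 - te1))

# Simulate a voxel 40 Hz off resonance. The estimate is unambiguous only
# while |delta_f| < 1 / (2*(TE2 - TE1)) = 250 Hz for these echo times.
true_hz, te1, te2 = 40.0, 0.002, 0.004
s1 = cmath.exp(2j * math.pi * true_hz * te1)
s2 = cmath.exp(2j * math.pi * true_hz * te2)
est = field_map_hz(s1, s2, te1, te2)
```

The phase-wrapping limit noted in the comment is why echo spacing is a design parameter in gradient shimming sequences.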
Two Pathways to Stimulus Encoding in Category Learning?
Davis, Tyler; Love, Bradley C.; Maddox, W. Todd
2008-01-01
Category learning theorists tacitly assume that stimuli are encoded by a single pathway. Motivated by theories of object recognition, we evaluate a dual-pathway account of stimulus encoding. The part-based pathway establishes mappings between sensory input and symbols that encode discrete stimulus features, whereas the image-based pathway applies holistic templates to sensory input. Our experiments use rule-plus-exception structures in which one exception item in each category violates a salient regularity and must be distinguished from other items. In Experiment 1, we find that discrete representations are crucial for recognition of exceptions following brief training. Experiments 2 and 3 involve multi-session training regimens designed to encourage either part-based or image-based encoding. We find that both pathways are able to support exception encoding, but have unique characteristics. We speculate that one advantage of the part-based pathway is the ability to generalize across domains, whereas the image-based pathway provides faster and more effortless recognition. PMID:19460948
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
Absolute angular encoder based on optical diffraction
NASA Astrophysics Data System (ADS)
Wu, Jian; Zhou, Tingting; Yuan, Bo; Wang, Liqiang
2015-08-01
A new encoding method for an absolute angular encoder based on optical diffraction was proposed in the present study. In this method, an encoder disc is specially designed such that a series of elements are uniformly spaced in one circle and each element consists of four diffraction gratings, tilted in the directions of 30°, 60°, -60° and -30°, respectively. The disc is illuminated by coherent light and the diffracted signals are received. The positions of the diffracted spots are used for absolute encoding and their intensities for subdivision, which differs from the traditional optical encoder based on the transparent/opaque binary principle. Since the track width on the disc is not limited by the diffraction pattern, this provides a new way to resolve the contradiction between size and resolution, which favors miniaturization of the encoder. According to the proposed principle, a diffraction pattern disc with a diameter of 40 mm was made by lithography on a glass substrate. A prototype absolute angular encoder with a resolution of 20" was built. Its maximum error was measured as 78" by comparison with a small-angle measuring system based on laser beam deflection.
Fine-pitched microgratings encoded by interference of UV femtosecond laser pulses.
Kamioka, Hayato; Miura, Taisuke; Kawamura, Ken-ichi; Hirano, Masahiro; Hosono, Hideo
2002-01-01
Fine-pitched microgratings are encoded on fused silica surfaces by a two-beam laser interference technique employing UV femtosecond pulses from the third harmonic of a Ti:sapphire laser. A pump-and-probe method utilizing a laser-induced optical Kerr effect or transient optical absorption change has been developed to achieve the time coincidence of the two pulses. Use of the UV pulses makes it possible to narrow the grating pitch to as small as 290 nm, and the groove width of the gratings is of nanoscale size. The present technique provides a novel opportunity for the fabrication of periodic nanoscale structures in various materials.
Projecting non-diffracting waves with intermediate-plane holography.
Mondal, Argha; Yevick, Aaron; Blackburn, Lauren C; Kanellakopoulos, Nikitas; Grier, David G
2018-02-19
We introduce intermediate-plane holography, which substantially improves the ability of holographic trapping systems to project propagation-invariant modes of light using phase-only diffractive optical elements. Translating the mode-forming hologram to an intermediate plane in the optical train can reduce the need to encode amplitude variations in the field, and therefore complements well-established techniques for encoding complex-valued transfer functions into phase-only holograms. Compared to standard holographic trapping implementations, intermediate-plane holograms greatly improve diffraction efficiency and mode purity of propagation-invariant modes, and so increase their useful non-diffracting range. We demonstrate this technique through experimental realizations of accelerating modes and long-range tractor beams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magee, Glen I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
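A minimal systematic Reed-Solomon encoder makes the structure concrete: parity symbols are the remainder of the message polynomial (shifted by the parity length) divided by a generator polynomial over GF(2^8). This sketch uses the common primitive polynomial 0x11d and is purely illustrative of the technique; it is not the AURA project's optimized implementation, whose field parameters and block layout are not given here.

```python
# Minimal systematic Reed-Solomon encoder over GF(2^8) -- an illustrative
# sketch of the block encoding described above.

PRIM = 0x11d                      # primitive polynomial x^8+x^4+x^3+x^2+1

GF_EXP = [0] * 512                # antilog table, doubled to avoid mod 255
GF_LOG = [0] * 256
_x = 1
for _i in range(255):
    GF_EXP[_i] = _x
    GF_LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= PRIM
for _i in range(255, 512):
    GF_EXP[_i] = GF_EXP[_i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def generator_poly(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)); '-' is '+' in GF(2^m)."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Append nsym parity symbols: remainder of msg(x)*x^nsym mod g(x)."""
    gen = generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):           # synthetic division by monic g(x)
        coef = rem[i]
        if coef:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]

def poly_eval(p, x):
    """Horner evaluation; a valid codeword is zero at every root of g(x)."""
    y = p[0]
    for c in p[1:]:
        y = gf_mul(y, x) ^ c
    return y
```

Table lookups replacing field multiplications, as above, are exactly the kind of optimization such a speed-constrained encoder depends on; the report's further reductions come from processor-specific tuning.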
NASA Astrophysics Data System (ADS)
Kafashi, Sajad
A need in dynamic micro-particle manipulation is the ability to position fragile particles, for instance biological particles such as blood cells, stem cells, neurons, pancreatic β cells, DNA, and chromosomes, without damaging them, so that they can be measured repeatedly without altering their behavior. An oscillating fiber induces vortices in a slurry of particles; the vortex force created by this oscillation attracts and traps particles located at steady streaming micro-eddies. If multiple oscillatory fibers are placed inside the slurry, then depending on the frequency and timing of oscillation this method can be used for contact-free particle shepherding and sorting and for transporting particles from one location to another. Due to the complicated dynamics of particles traveling in the fluid, the presence of noise, and the significant number of particles, attempts to use commercial PIV software to track individual particle paths could not discriminate real particles from noise interference. To enhance identification and tracking of individual particles, a novel encoded-particle tracking velocimetry (ePTV) technique is developed in this dissertation work and used in the experiments to track the particle trajectories. An analytic model is developed to determine the number of lost particles due to the finite image size, based on a calculation of the probability that imaged particles of a specific mean velocity, or having a uniform velocity distribution and encoding pattern, will exit the field of view. The encoded pulse technique has been implemented in experiments in which images containing 100-200 objects, including encoded trajectories, have been measured. Using the developed ePTV algorithm, approximately 30% of the identified objects were classified as encoded particle trajectories.
Two types of oscillation mechanism are used in the experimental component of this study: a PZT flexure-based macro-probe driven at frequencies around 250 Hz and higher-frequency dynamic-absorber, quartz-based micro-probes driven at frequencies around 32 kHz. Two models for predicting the frequency response of micro-scale oscillatory probes are developed in this dissertation. In these studies, the attached fibers were either 75 μm diameter tungsten or 7 μm diameter carbon, with lengths ranging from around 1 to 15 mm. The oscillators used in these experiments were commercial 32.768 kHz quartz tuning forks. Theoretical predictions of the values of the natural frequencies for different vibration modes show an asymptotic relationship with the length and a linear relationship with the diameter of the attached fiber. Similar results are observed from experiment, one with a tungsten probe having an initial fiber length of 14.11 mm incrementally etched down to 0.83 mm, and another tungsten probe of length 8.16 mm incrementally etched in diameter, in both cases using chronocoulometry to determine incremental volumetric material removal. Of particular relevance is that, when a 'zero' is observed in the response of the tine, one mode of the fiber is matched to the tine frequency and is acting as an absorber. This represents an optimal condition for contact sensing and for transferring energy to the fiber for fluid mixing, touch sensing and surface modification applications. Consequently, the parametric models developed in this dissertation can be utilized for designing probes of arbitrary sizes, thereby eliminating the empirical trial and error previously used.
Jo, Yeong Deuk; Ha, Yeaseong; Lee, Joung-Ho; Park, Minkyu; Bergsma, Alex C; Choi, Hong-Il; Goritschnig, Sandra; Kloosterman, Bjorn; van Dijk, Peter J; Choi, Doil; Kang, Byoung-Cheorl
2016-10-01
Using fine mapping techniques, the genomic region co-segregating with Restorer-of-fertility (Rf) in pepper was delimited to a region 821 kb in length. A PPR gene in this region, CaPPR6, was identified as a strong candidate for Rf based on its expression pattern and the characteristics of its encoded sequence. Cytoplasmic-genic male sterility (CGMS) has been used for the efficient production of hybrid seeds in peppers (Capsicum annuum L.). Although the mitochondrial candidate genes that might be responsible for cytoplasmic male sterility (CMS) have been identified, the nuclear Restorer-of-fertility (Rf) gene has not been isolated. To identify the genomic region co-segregating with Rf in pepper, we performed fine mapping using an Rf-segregating population consisting of 1068 F2 individuals, based on BSA-AFLP and a comparative mapping approach. Through six cycles of chromosome walking, the co-segregating region harboring the Rf locus was delimited to within 821 kb of sequence. Prediction of expressed genes in this region based on transcription analysis revealed four candidate genes. Among these, CaPPR6 encodes a pentatricopeptide repeat (PPR) protein with PPR motifs repeated 14 times. Characterization of the CaPPR6 protein sequence, based on alignment with other homologs, showed that CaPPR6 is a typical Rf-like (RFL) gene reported to have undergone diversifying selection during evolution. A marker developed from a sequence near CaPPR6 showed a higher prediction rate for the Rf phenotype than previously developed markers when applied to a panel of breeding lines of diverse origin. These results suggest that CaPPR6 is a strong candidate for the Rf gene in pepper.
Photocontrollable Fluorescent Proteins for Superresolution Imaging
Shcherbakova, Daria M.; Sengupta, Prabuddha; Lippincott-Schwartz, Jennifer; Verkhusha, Vladislav V.
2014-01-01
Superresolution fluorescence microscopy permits the study of biological processes at scales small enough to visualize fine subcellular structures that are unresolvable by traditional diffraction-limited light microscopy. Many superresolution techniques, including those applicable to live cell imaging, utilize genetically encoded photocontrollable fluorescent proteins. The fluorescence of these proteins can be controlled by light of specific wavelengths. In this review, we discuss the biochemical and photophysical properties of photocontrollable fluorescent proteins that are relevant to their use in superresolution microscopy. We then describe the recently developed photoactivatable, photoswitchable, and reversibly photoswitchable fluorescent proteins, and we detail their particular usefulness in single-molecule localization–based and nonlinear ensemble–based superresolution techniques. Finally, we discuss recent applications of photocontrollable proteins in superresolution imaging, as well as how these applications help to clarify properties of intracellular structures and processes that are relevant to cell and developmental biology, neuroscience, cancer biology and biomedicine. PMID:24895855
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
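The abstract does not detail C4's combinatorial coder, but the underlying idea of combinatorial (enumerative) coding can be sketched: a binary block is represented by its length, its count of ones, and its lexicographic rank among all blocks with that count, which approaches the entropy bound like arithmetic coding while needing only integer arithmetic. A minimal Python sketch follows; the function names are illustrative and not taken from C4:

```python
import math

def enc(bits):
    """Enumerative (combinatorial) code: rank a binary block among all
    blocks of the same length with the same number of ones (lex order)."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            # all blocks placing a 0 here (with ones_left ones still to
            # come in the remaining n-i-1 positions) rank below this one
            rank += math.comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, rank

def dec(n, k, rank):
    """Invert enc: rebuild the block from (length, weight, rank)."""
    bits, ones_left = [], k
    for i in range(n):
        c = math.comb(n - i - 1, ones_left)  # blocks with a 0 at position i
        if ones_left and rank >= c:
            bits.append(1)
            rank -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits
```

Once the weight k is known, the rank always fits in about log2 C(n, k) bits, which is exactly the information content of a fixed-weight block.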
Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2015-03-01
PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.
Measuring Time-of-Flight in an Ultrasonic LPS System Using Generalized Cross-Correlation
Villladangos, José Manuel; Ureña, Jesús; García, Juan Jesús; Mazo, Manuel; Hernández, Álvaro; Jiménez, Ana; Ruíz, Daniel; De Marziani, Carlos
2011-01-01
In this article, a time-of-flight detection technique in the frequency domain is described for an ultrasonic Local Positioning System (LPS) based on encoded beacons. Beacon transmissions have been synchronized and become simultaneous by means of the DS-CDMA (Direct-Sequence Code Division Multiple Access) technique. Every beacon has been associated to a 255-bit Kasami code. The detection of signal arrival instant at the receiver, from which the distance to each beacon can be obtained, is based on the application of the Generalized Cross-Correlation (GCC), by using the cross-spectral density between the received signal and the sequence to be detected. Prior filtering to enhance the frequency components around the carrier frequency (40 kHz) has improved estimations when obtaining the correlation function maximum, which implies an improvement in distance measurement precision. Positioning has been achieved by using hyperbolic trilateration, based on the Time Differences of Arrival (TDOA) between a reference beacon and the others. PMID:22346645
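The core delay estimate can be sketched in a few lines. The paper computes the Generalized Cross-Correlation via the cross-spectral density, with band-pass prefiltering around the 40 kHz carrier; the toy below instead takes the time-domain correlation maximum against a random ±1 code standing in for a 255-bit Kasami sequence, which captures the argmax step but none of the spectral weighting:

```python
import random

def xcorr_delay(code, rx):
    """Return the lag maximizing the cross-correlation between the known
    code and the received signal (time-domain stand-in for the GCC peak)."""
    n, m = len(rx), len(code)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n - m + 1):
        v = sum(code[j] * rx[lag + j] for j in range(m))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

random.seed(0)
code = [random.choice([-1, 1]) for _ in range(255)]  # stand-in for a Kasami code
true_delay = 37                                      # arrival instant, in samples
rx = ([0.0] * true_delay
      + [c + random.gauss(0, 0.3) for c in code]     # noisy echo of the code
      + [0.0] * 20)                                  # trailing silence
```

In the actual system the same maximum is located on the GCC derived from the cross-spectral density, and the resulting times-of-flight feed the TDOA hyperbolic trilateration.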
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption are essential for secure real-time video transmission. Applying both techniques simultaneously is challenging when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests at the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand-encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level, and then uses a classical operator-based planner to solve subproblems in contexts defined by the high-level decomposition.
Arduino Due based tool to facilitate in vivo two-photon excitation microscopy.
Artoni, Pietro; Landi, Silvia; Sato, Sebastian Sulis; Luin, Stefano; Ratto, Gian Michele
2016-04-01
Two-photon excitation spectroscopy is a powerful technique for the characterization of the optical properties of genetically encoded and synthetic fluorescent molecules. Excitation spectroscopy requires tuning the wavelength of the Ti:sapphire laser while carefully monitoring the delivered power. To assist laser tuning and the control of delivered power, we developed an Arduino Due based tool for the automatic acquisition of high quality spectra. This tool is portable, fast, affordable and precise. It allowed studying the impact of scattering and of blood absorption on two-photon excitation light. In this way, we determined the wavelength-dependent deformation of excitation spectra occurring in deep tissues in vivo.
Nature of the optical information recorded in speckles
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.
1998-09-01
The process of encoding displacement information in electronic holographic interferometry is reviewed. Procedures to extend the applicability of this technique to large deformations are given. The proposed techniques are applied, and results from these experiments are compared with results obtained by other means. The similarity between the two sets of results illustrates the validity of the new techniques.
Centric scan SPRITE for spin density imaging of short relaxation time porous materials.
Chen, Quan; Halse, Meghan; Balcom, Bruce J
2005-02-01
The single-point ramped imaging with T1 enhancement (SPRITE) imaging technique has proven to be a very robust and flexible method for the study of a wide range of systems with short signal lifetimes. As a pure phase encoding technique, SPRITE is largely immune to image distortions generated by susceptibility variations, chemical shift and paramagnetic impurities. In addition, it avoids the line width restrictions on resolution common to time-based sampling, frequency encoding methods. The standard SPRITE technique is however a longitudinal steady-state imaging method; the image intensity is related to the longitudinal steady state, which not only decreases the signal-to-noise ratio, but also introduces many parameters into the image signal equation. A centric scan strategy for SPRITE removes the longitudinal steady state from the image intensity equation and increases the inherent image intensity. Two centric scan SPRITE methods, that is, Spiral-SPRITE and Conical-SPRITE, with fast acquisition and greatly reduced gradient duty cycle, are outlined. Multiple free induction decay (FID) points may be acquired during SPRITE sampling for signal averaging to increase signal-to-noise ratio or for T2* and spin density mapping without an increase in acquisition time. Experimental results show that most porous sedimentary rock and concrete samples have a single exponential T2* decay due to susceptibility difference-induced field distortion. Inhomogeneous broadening thus dominates, which suggests that spin density imaging can be easily obtained by SPRITE.
A survey of artificial immune system based intrusion detection.
Yang, Hua; Li, Tao; Hu, Xinlei; Wang, Feng; Zou, Yang
2014-01-01
In the area of computer security, Intrusion Detection (ID) is a mechanism that attempts to discover abnormal access to computers by analyzing various interactions. There is a large literature on ID, but this study surveys only the approaches based on Artificial Immune Systems (AIS). The use of AIS in ID is an appealing concept in current techniques. This paper summarizes AIS-based ID methods from a new viewpoint; moreover, a framework is proposed for the design of AIS-based ID Systems (IDSs). This framework is analyzed and discussed based on three core aspects: antibody/antigen encoding, generation algorithm, and evolution mode. We then collate the commonly used algorithms, their implementation characteristics, and the development of IDSs into this framework. Finally, some of the future challenges in this area are highlighted.
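As a concrete illustration of the "generation algorithm" aspect, a classic AIS mechanism for ID is negative selection: random detectors are generated, any that match normal ("self") samples are discarded, and the survivors flag whatever they match as anomalous. A minimal sketch with binary-encoded samples and the r-contiguous-bits matching rule (a common AIS matching rule; all parameters here are illustrative):

```python
import random

def matches(detector, sample, r):
    """r-contiguous-bits rule: match if detector and sample agree on at
    least r consecutive positions."""
    run = best = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        best = max(best, run)
    return best >= r

def negative_selection(self_set, n_detectors, length, r, seed=1):
    """Generate random detectors, censoring any that match 'self'."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = [rng.randint(0, 1) for _ in range(length)]
        if not any(matches(d, s, r) for s in self_set):
            detectors.append(d)
    return detectors

def is_anomalous(sample, detectors, r):
    """A sample is anomalous if any surviving detector matches it."""
    return any(matches(d, sample, r) for d in detectors)
```

By construction, self samples can never trigger an alarm; coverage of the anomalous space depends on the number of detectors and on the matching threshold r.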
Encoding Orientation and the Remembering of Schizophrenic Young Adults
ERIC Educational Resources Information Center
Koh, Soon D.; Peterson, Rolf A.
1978-01-01
This research examines different types of encoding strategies, in addition to semantic and organizational encodings, and their effects on schizophrenics' remembering. Based on Craik and Lockhart (1972), i.e., memory performance is a function of depth of encoding processing, this analysis compares schizophrenics' encoding processing with that of…
Efficient morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach for constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional tradeoffs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
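The tau-map idea can be caricatured in a few lines: sample the vector field on a grid, let each cell map to the neighbor its vector points toward (a one-step discrete map rather than a true time-tau flow map), and take the cells lying on cycles of that map as a rough outer approximation of the recurrent Morse sets; the condensation of this graph would play the role of the MCG. This is far cruder than the paper's construction and is only meant to convey the graph-based viewpoint:

```python
from itertools import product

def recurrent_cells(n, vfield):
    """Cells of an n-by-n grid that lie on cycles of the one-step
    'point-toward-the-neighbor' map induced by vfield(x, y) -> (vx, vy)."""
    def succ(cell):
        i, j = cell
        x, y = i - n / 2 + 0.5, j - n / 2 + 0.5   # cell-center coordinates
        vx, vy = vfield(x, y)
        if abs(vx) >= abs(vy):                     # step along dominant component
            i = min(max(i + (vx > 0) - (vx < 0), 0), n - 1)
        else:
            j = min(max(j + (vy > 0) - (vy < 0), 0), n - 1)
        return (i, j)

    recurrent = set()
    for cell in product(range(n), range(n)):
        for _ in range(n * n):          # iterate long enough to reach a cycle
            cell = succ(cell)
        start = cell
        while True:                     # collect the whole cycle
            recurrent.add(cell)
            cell = succ(cell)
            if cell == start:
                break
    return recurrent
```

For a sink field v = (-x, -y) the recurrent set collapses to the small cluster of cells around the fixed point, which is the behavior a Morse decomposition is meant to isolate.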
Overview of Nonelectronic Eye-Gaze Communication Techniques.
ERIC Educational Resources Information Center
Goossens, Carol A.; Crain, Sharon S.
1987-01-01
The article discusses currently used eye gaze communication techniques with the severely physically disabled (eye-gaze vest, laptray, transparent display, and mirror/prism communicator), presents information regarding the types of message displays used to depict encoded material, and discusses the advantages of implementing nonelectronic eye-gaze…
Making long-term memories in minutes: a spaced learning pattern from memory research in education
Kelley, Paul; Whatson, Terry
2013-01-01
Memory systems select from environmental stimuli those to encode permanently. Repeated stimuli separated by timed spaces without stimuli can initiate Long-Term Potentiation (LTP) and long-term memory (LTM) encoding. These processes occur in time scales of minutes, and have been demonstrated in many species. This study reports on using a specific timed pattern of three repeated stimuli separated by 10 min spaces drawn from both behavioral and laboratory studies of LTP and LTM encoding. A technique was developed based on this pattern to test whether encoding complex information into LTM in students was possible using the pattern within a very short time scale. In an educational context, stimuli were periods of highly compressed instruction, and spaces were created through 10 min distractor activities. Spaced Learning in this form was used as the only means of instruction for a national curriculum Biology course, and led to very rapid LTM encoding as measured by the high-stakes test for the course. Remarkably, learning at a greatly increased speed and in a pattern that included deliberate distraction produced significantly higher scores than random answers (p < 0.00001) and scores were not significantly different for experimental groups (one hour spaced learning) and control groups (four months teaching). Thus learning per hour of instruction, as measured by the test, was significantly higher for the spaced learning groups (p < 0.00001). In a third condition, spaced learning was used to replace the end of course review for one of two examinations. Results showed significantly higher outcomes for the course using spaced learning (p < 0.0005). The implications of these findings and further areas for research are briefly considered. PMID:24093012
Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B
2017-06-01
To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Tomographic Aperture-Encoded Particle Tracking Velocimetry: A New Approach to Volumetric PIV
NASA Astrophysics Data System (ADS)
Troolin, Dan; Boomsma, Aaron; Lai, Wing; Pothos, Stamatios; Fluid Mechanics Research Instruments Team
2016-11-01
Volumetric velocity fields are useful in a wide variety of fluid mechanics applications. Several types of three-dimensional imaging methods have been used in the past with varying degrees of success, for example, 3D PTV (Maas et al., 1993), DDPIV (Pereira et al., 2006), Tomographic PIV (Elsinga, 2006), and V3V (Troolin and Longmire, 2009), among others. Each of these techniques has shown advantages and disadvantages in different areas. With the advent of higher-resolution, lower-noise cameras with higher stability levels, new techniques are emerging that combine the advantages of the existing techniques. This talk describes a new technique called Tomographic Aperture-Encoded Particle Tracking Velocimetry (TAPTV), in which segmented triangulation and diameter tolerance are used to achieve three-dimensional particle tracking with extremely high particle densities (on the order of ppp = 0.2 or higher) without the drawbacks normally associated with ghost particles (for example, in TomoPIV). The results are highly spatially resolved data with very fast processing times. A detailed explanation of the technique as well as plots, movies, and experimental considerations will be discussed.
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
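Application (1) can be illustrated with the smallest possible example: one Poisson neuron with an exponential nonlinearity and a Gaussian prior on a scalar stimulus. The log posterior is concave, so a simple line search finds the MAP estimate. The paper of course treats full populations and spike trains; the model and all numbers below are illustrative:

```python
import math

def map_decode(k, a, b, sigma2, lo=-5.0, hi=5.0, steps=10001):
    """MAP stimulus estimate for one Poisson neuron with spike count k,
    rate = exp(a*s + b), and Gaussian prior s ~ N(0, sigma2).
    The log posterior is concave in s, so a dense grid search suffices."""
    def log_post(s):
        # Poisson log-likelihood (dropping the k! constant) plus log prior
        return k * (a * s + b) - math.exp(a * s + b) - s * s / (2 * sigma2)
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max(grid, key=log_post)
```

With a weak prior (large sigma2), a = 1 and b = 0, the estimate for k observed spikes approaches ln k, the maximum-likelihood answer; the prior pulls it toward zero as sigma2 shrinks.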
Multiple Echo Diffusion Tensor Acquisition Technique (MEDITATE) on a 3T clinical scanner
Baete, Steven H.; Cho, Gene; Sigmund, Eric E.
2013-01-01
This paper describes the concepts and implementation of an MRI method, Multiple Echo Diffusion Tensor Acquisition Technique (MEDITATE), which is capable of acquiring apparent diffusion tensor maps in two scans on a 3T clinical scanner. In each MEDITATE scan, a set of RF-pulses generates multiple echoes whose amplitudes are diffusion-weighted in both magnitude and direction by a pattern of diffusion gradients. As a result, two scans acquired with different diffusion weighting strengths suffice for accurate estimation of diffusion tensor imaging (DTI)-parameters. The MEDITATE variation presented here expands previous MEDITATE approaches to adapt to the clinical scanner platform, such as exploiting longitudinal magnetization storage to reduce T2-weighting. Fully segmented multi-shot Cartesian encoding is used for image encoding. MEDITATE was tested on isotropic (agar gel), anisotropic diffusion phantoms (asparagus), and in vivo skeletal muscle in healthy volunteers with cardiac-gating. Comparisons of accuracy were performed with standard twice-refocused spin echo (TRSE) DTI in each case and good quantitative agreement was found between diffusion eigenvalues, mean diffusivity, and fractional anisotropy derived from TRSE-DTI and from the MEDITATE sequence. Orientation patterns were correctly reproduced in both isotropic and anisotropic phantoms, and approximately so for in vivo imaging. This illustrates that the MEDITATE method of compressed diffusion encoding is feasible on the clinical scanner platform. With future development and employment of appropriate view-sharing image encoding this technique may be used in clinical applications requiring time-sensitive acquisition of DTI parameters such as dynamical DTI in muscle. PMID:23828606
Chu, Mei-Lan; Chang, Hing-Chiu; Chung, Hsiao-Wen; Truong, Trong-Kha; Bashir, Mustafa R.; Chen, Nan-kuei
2014-01-01
Purpose A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion weighted imaging (DWI). Theory Images with reduced artifacts are reconstructed with an iterative POCS procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. Methods The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved DWI data corresponding to different k-space trajectories and matrix condition numbers. Results Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. Conclusion POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods. PMID:25394325
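The projection-onto-convex-sets principle behind POCSMUSE can be shown on a toy problem: alternately project a point onto two convex sets (here a line and a disk, standing in for constraints such as data consistency and the coil sensitivity profile) until it lands in their intersection. This is a hedged sketch of the POCS iteration only, unrelated to the actual MRI operators:

```python
import math

def proj_line(p):
    # projection onto the convex set {(x, y) : y = x}
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def proj_disk(p, c=(1.0, 0.0), r=1.0):
    # projection onto the closed disk of radius r centered at c
    dx, dy = p[0] - c[0], p[1] - c[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p
    return (c[0] + r * dx / d, c[1] + r * dy / d)

def pocs(p, iters=50):
    """Alternating projections: for convex sets with nonempty
    intersection, the iterates converge to a point in the intersection."""
    for _ in range(iters):
        p = proj_disk(proj_line(p))
    return p
```

In POCSMUSE the same loop runs in image space, with each projector enforcing one physical constraint on the reconstruction.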
A new detection scheme for ultrafast 2D J-resolved spectroscopy
NASA Astrophysics Data System (ADS)
Giraudeau, Patrick; Akoka, Serge
2007-06-01
Recent ultrafast techniques enable 2D NMR spectra to be obtained in a single scan. A modification of the detection scheme involved in this technique is proposed, permitting the achievement of 2D 1H J-resolved spectra in 500 ms. The detection gradient echoes are substituted by spin echoes to obtain spectra where the coupling constants are encoded along the direct ν2 domain. The use of this new J-resolved detection block after continuous phase-encoding excitation schemes is discussed in terms of resolution and sensitivity. J-resolved spectra obtained on cinnamic acid and 3-ethyl bromopropionate are presented, revealing the expected 2D J-patterns with coupling constants as small as 2 Hz.
Catching the engram: strategies to examine the memory trace.
Sakaguchi, Masanori; Hayashi, Yasunori
2012-09-21
Memories are stored within neuronal ensembles in the brain. Modern genetic techniques can be used to not only visualize specific neuronal ensembles that encode memories (e.g., fear, craving) but also to selectively manipulate those neurons. These techniques are now being expanded for the study of various types of memory. In this review, we will summarize the genetic methods used to visualize and manipulate neurons involved in the representation of memory engrams. The methods will help clarify how memory is encoded, stored and processed in the brain. Furthermore, these approaches may contribute to our understanding of the pathological mechanisms associated with human memory disorders and, ultimately, may aid the development of therapeutic strategies to ameliorate these diseases.
Sarkar, Sankho Turjo; Bhondekar, Amol P; Macaš, Martin; Kumar, Ritesh; Kaur, Rishemjit; Sharma, Anupma; Gulati, Ashu; Kumar, Amod
2015-11-01
The paper presents a novel encoding scheme for neuronal code generation for odour recognition using an electronic nose (EN). This scheme is based on channel encoding using multiple Gaussian receptive fields superimposed over the temporal EN responses. The encoded data is further applied to a spiking neural network (SNN) for pattern classification. Two forms of SNN, a back-propagation based SpikeProp and a dynamic evolving SNN are used to learn the encoded responses. The effects of information encoding on the performance of SNNs have been investigated. Statistical tests have been performed to determine the contribution of the SNN and the encoding scheme to overall odour discrimination. The approach has been implemented in odour classification of orthodox black tea (Kangra-Himachal Pradesh Region) thereby demonstrating a biomimetic approach for EN data analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
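The channel-encoding step can be sketched directly: m Gaussian receptive fields tile the input range, and each field's activation is converted to a spike latency (stronger activation, earlier spike), a standard way of turning analog sensor values into spike times for SNNs such as SpikeProp. The parameter choices below are illustrative, not taken from the paper:

```python
import math

def grf_encode(x, m=8, width=None, t_max=10.0):
    """Population (channel) encoding: m Gaussian receptive fields tile
    [0, 1]; each field's activation maps to a spike latency in [0, t_max],
    with stronger activation producing an earlier spike."""
    width = width or 1.0 / (m - 1)            # overlap set by field spacing
    centers = [i / (m - 1) for i in range(m)]
    acts = [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]
    return [t_max * (1.0 - a) for a in acts]  # one latency per input neuron
```

A sensor reading is thus spread across m input neurons, and the neuron whose field center is nearest the reading fires first, giving the SNN a temporally coded version of the EN response.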
MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks
NASA Astrophysics Data System (ADS)
Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.
2007-05-01
This paper presents a system for real-time video reception on low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. To meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, a pre-processing stage is performed prior to the MPEG-4 coding stage. It consists first of a segmentation phase according to motion and texture, based on Principal Component Analysis (PCA) of the input video sequence, and second of a down-sampling phase that depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher visual image quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a programmable logic device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.
Size Constancy in Bat Biosonar? Perceptual Interaction of Object Aperture and Distance
Heinrich, Melina; Wiegrebe, Lutz
Size constancy in bat biosonar? Perceptual interaction of object aperture and distance.
Heinrich, Melina; Wiegrebe, Lutz
2013-01-01
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed "size constancy". It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the 'sonar aperture', i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats.
PMID:23630598
Study of the Gray Scale, Polychromatic, Distortion Invariant Neural Networks Using the IPA Model.
NASA Astrophysics Data System (ADS)
Uang, Chii-Maw
Research in the optical neural network field is primarily motivated by the fact that humans recognize objects better than conventional digital computers do, and by the massively parallel nature inherent in optics. This research represents a continuous effort during the past several years in the exploitation of neurocomputing for pattern recognition. Based on the interpattern association (IPA) model and the Hamming net model, many new systems and applications are introduced. A gray-level discrete associative memory based on object decomposition/composition is proposed for recognizing gray-level patterns. This technique extends the processing ability from binary mode to gray-level mode, and thus the information capacity is increased. Two polychromatic optical neural networks using color liquid crystal television (LCTV) panels for color pattern recognition are introduced. By introducing a color encoding technique in conjunction with the interpattern associative algorithm, a color associative memory was realized. Based on the color decomposition and composition technique, a color exemplar-based Hamming net was built for color image classification. A shift-invariant neural network is presented through use of the translation-invariant property of the modulus of the Fourier transform and the hetero-associative interpattern association (IPA) memory. To extract the main features, a quadrantal sampling method is used to sample the data, which then replace the training patterns; the concept of hetero-associative memory is used to recall distorted objects. A shift- and rotation-invariant neural network using an interpattern hetero-association (IHA) model is presented. To preserve the shift and rotation invariant properties, a set of binarized-encoded circular harmonic expansion (CHE) functions at the Fourier domain is used as the training set.
We use the shift and symmetry properties of the modulus of the Fourier spectrum to avoid the problem of centering the CHE functions. Almost all neural networks have both positive and negative weights, which increases the difficulty of optical implementation. A method to construct a unipolar IPA interconnection weight matrix (IWM) is discussed: by searching the redundant interconnection links, all negative links can be removed effectively.
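The exemplar-based Hamming net mentioned above classifies an input by scoring each stored exemplar on the number of bits it shares with the input and picking the best match. A minimal sketch of that matching step (the exemplar patterns and probe below are made up for illustration, not taken from the dissertation):

```python
import numpy as np

def hamming_net(exemplars, x):
    """Score each stored exemplar by the number of bits it shares with
    input x (the Hamming-net matching score) and return the best match."""
    scores = [np.sum(e == x) for e in exemplars]
    return int(np.argmax(scores)), scores

# Three binary exemplar patterns (rows) and a noisy probe.
exemplars = np.array([[1, 0, 1, 0, 1, 0],
                      [1, 1, 1, 0, 0, 0],
                      [0, 0, 0, 1, 1, 1]])
probe = np.array([1, 1, 1, 1, 0, 0])   # exemplar 1 with one bit flipped
best, scores = hamming_net(exemplars, probe)
```

Even with one corrupted bit, the probe still scores highest against its source exemplar, which is the robustness the classifier relies on.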
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
Nanostructures Enabled by On-Wire Lithography (OWL)
Braunschweig, Adam B.; Schmucker, Abrin L.; Wei, Wei David; Mirkin, Chad A.
2010-01-01
Nanostructures fabricated by a novel technique, termed On-Wire-Lithography (OWL), can be combined with organic and biological molecules to create systems with emergent and highly functional properties. OWL is a template-based, electrochemical process for forming gapped cylindrical structures on a solid support, with feature sizes (both gap and segment length) that can be controlled on the sub-100 nm length scale. Structures prepared by this method have provided valuable insight into the plasmonic properties of noble metal nanomaterials and have formed the basis for novel molecular electronic, encoding, and biological detection devices. PMID:20396668
Classification of user interfaces for graph-based online analytical processing
NASA Astrophysics Data System (ADS)
Michaelis, James R.
2016-05-01
In the domain of business intelligence, user-oriented software for conducting multidimensional analysis via Online- Analytical Processing (OLAP) is now commonplace. In this setting, datasets commonly have well-defined sets of dimensions and measures around which analysis tasks can be conducted. However, many forms of data used in intelligence operations - deriving from social networks, online communications, and text corpora - will consist of graphs with varying forms of potential dimensional structure. Hence, enabling OLAP over such data collections requires explicit definition and extraction of supporting dimensions and measures. Further, as Graph OLAP remains an emerging technique, limited research has been done on its user interface requirements. Namely, on effective pairing of interface designs to different types of graph-derived dimensions and measures. This paper presents a novel technique for pairing of user interface designs to Graph OLAP datasets, rooted in Analytic Hierarchy Process (AHP) driven comparisons. Attributes of the classification strategy are encoded through an AHP ontology, developed in our alternate work and extended to support pairwise comparison of interfaces. Specifically, according to their ability, as perceived by Subject Matter Experts, to support dimensions and measures corresponding to Graph OLAP dataset attributes. To frame this discussion, a survey is provided both on existing variations of Graph OLAP, as well as existing interface designs previously applied in multidimensional analysis settings. Following this, a review of our AHP ontology is provided, along with a listing of corresponding dataset and interface attributes applicable toward SME recommendation structuring. A walkthrough of AHP-based recommendation encoding via the ontology-based approach is then provided. The paper concludes with a short summary of proposed future directions seen as essential for this research area.
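The AHP-driven pairwise comparisons described above reduce, at their core, to deriving a priority vector from a pairwise comparison matrix. A minimal sketch using the row geometric-mean approximation (the paper's ontology-based encoding and SME judgments are not reproduced here; the weights are illustrative):

```python
import numpy as np

def ahp_priorities(A):
    """Approximate the AHP priority vector with the row geometric-mean
    method (exact for a perfectly consistent comparison matrix)."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

# Consistent pairwise matrix built from true weights 0.6, 0.3, 0.1:
# A[i, j] expresses how strongly option i is preferred over option j.
w = np.array([0.6, 0.3, 0.1])
A = w[:, None] / w[None, :]
p = ahp_priorities(A)
```

For real (inconsistent) expert judgments the principal eigenvector is the standard choice; the geometric-mean variant shown here is a common lightweight substitute.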
Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI
NASA Technical Reports Server (NTRS)
Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.
2001-01-01
Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties from protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
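The claim that image SNR is the same for any orthonormal encoding can be illustrated with the matrix description the abstract mentions: for unit-variance measurement noise, the total reconstruction noise power is fixed by the Frobenius norm of the decoding matrix, which is identical for normalized Hadamard and Fourier encodings. A small numpy sketch (not the authors' code):

```python
import numpy as np

N = 8
# Sylvester-construction Hadamard matrix, normalized to be orthonormal.
H = np.array([[1.0]])
while H.shape[0] < N:
    H = np.kron(H, np.array([[1, 1], [1, -1]]))
H = H / np.sqrt(N)

# Normalized DFT (Fourier-encoding) matrix.
k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

def noise_gain(E):
    """Frobenius norm of the decoding matrix inv(E): for unit-variance
    measurement noise this fixes the total reconstruction noise power."""
    return np.linalg.norm(np.linalg.inv(E))
```

Both encodings are orthonormal, so their inverses are just their (conjugate) transposes and the noise gain is sqrt(N) in each case; a non-orthonormal localized basis would give a larger gain.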
Stern, C E; Corkin, S; González, R G; Guimaraes, A R; Baker, J R; Jennings, P J; Carr, C A; Sugiura, R M; Vedantham, V; Rosen, B R
1996-01-01
Considerable evidence exists to support the hypothesis that the hippocampus and related medial temporal lobe structures are crucial for the encoding and storage of information in long-term memory. Few human imaging studies, however, have successfully shown signal intensity changes in these areas during encoding or retrieval. Using functional magnetic resonance imaging (fMRI), we studied normal human subjects while they performed a novel picture encoding task. High-speed echo-planar imaging techniques evaluated fMRI signal changes throughout the brain. During the encoding of novel pictures, statistically significant increases in fMRI signal were observed bilaterally in the posterior hippocampal formation and parahippocampal gyrus and in the lingual and fusiform gyri. To our knowledge, this experiment is the first fMRI study to show robust signal changes in the human hippocampal region. It also provides evidence that the encoding of novel, complex pictures depends upon an interaction between ventral cortical regions, specialized for object vision, and the hippocampal formation and parahippocampal gyrus, specialized for long-term memory. PMID:8710927
Encoding-related brain activity during deep processing of verbal materials: a PET study.
Fujii, Toshikatsu; Okuda, Jiro; Tsukiura, Takashi; Ohtake, Hiroya; Suzuki, Maki; Kawashima, Ryuta; Itoh, Masatoshi; Fukuda, Hiroshi; Yamadori, Atsushi
2002-12-01
The recent advent of neuroimaging techniques provides an opportunity to examine brain regions related to a specific memory process such as episodic memory encoding. There is, however, a possibility that areas active during an assumed episodic memory encoding task, compared with a control task, involve not only areas directly relevant to episodic memory encoding processes but also areas associated with other cognitive processes for on-line information. We used positron emission tomography (PET) to differentiate these two kinds of regions. Normal volunteers were engaged in deep (semantic) or shallow (phonological) processing of new or repeated words during PET. Results showed that deep processing, compared with shallow processing, resulted in significantly better recognition performance and that this effect was associated with activation of various brain areas. Further analyses revealed that there were regions directly relevant to episodic memory encoding in the anterior part of the parahippocampal gyrus, inferior frontal gyrus, supramarginal gyrus, anterior cingulate gyrus, and medial frontal lobe in the left hemisphere. Our results demonstrated that several regions, including the medial temporal lobe, play a role in episodic memory encoding.
Encoding and Retrieval Interference in Sentence Comprehension: Evidence from Agreement
Villata, Sandra; Tabor, Whitney; Franck, Julie
2018-01-01
Long-distance verb-argument dependencies generally require the integration of a fronted argument when the verb is encountered for sentence interpretation. Under a parsing model that handles long-distance dependencies through a cue-based retrieval mechanism, retrieval is hampered when retrieval cues also resonate with non-target elements (retrieval interference). However, similarity-based interference may also stem from interference arising during the encoding of elements in memory (encoding interference), an effect that is not directly accountable for by a cue-based retrieval mechanism. Although encoding and retrieval interference are clearly distinct at the theoretical level, it is difficult to disentangle the two on empirical grounds, since encoding interference may also manifest at the retrieval region. We report two self-paced reading experiments aimed at teasing apart the role of each component in gender and number subject-verb agreement in Italian and English object relative clauses. In Italian, the verb does not agree in gender with the subject, thus providing no cue for retrieval. In English, although present tense verbs agree in number with the subject, past tense verbs do not, allowing us to test the role of number as a retrieval cue within the same language. Results from both experiments converge, showing similarity-based interference at encoding, and some evidence for an effect at retrieval. After having pointed out the non-negligible role of encoding in sentence comprehension, and noting that Lewis and Vasishth’s (2005) ACT-R model of sentence processing, the most fully developed cue-based retrieval approach to sentence processing, does not predict encoding effects, we propose an augmentation of this model that predicts these effects. We then also propose a self-organizing sentence processing model (SOSP), which has the advantage of accounting for retrieval and encoding interference with a single mechanism. PMID:29403414
Optimized distortion correction technique for echo planar imaging.
Chen, N K; Wyrwicz, A M
2001-03-01
A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B(o) field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.
Authentication of gold nanoparticle encoded pharmaceutical tablets using polarimetric signatures.
Carnicer, Artur; Arteaga, Oriol; Suñé-Negre, Josep M; Javidi, Bahram
2016-10-01
The counterfeiting of pharmaceutical products represents concerns for both industry and the safety of the general public. Falsification produces losses to companies and poses health risks for patients. In order to detect fake pharmaceutical tablets, we propose producing film-coated tablets with gold nanoparticle encoding. These coated tablets contain unique polarimetric signatures. We present experiments to show that ellipsometric optical techniques, in combination with machine learning algorithms, can be used to distinguish genuine and fake samples. To the best of our knowledge, this is the first report using gold nanoparticles encoded with optical polarimetric classifiers to prevent the counterfeiting of pharmaceutical products.
Digital spiral-slit for bi-photon imaging
NASA Astrophysics Data System (ADS)
McLaren, Melanie; Forbes, Andrew
2017-04-01
Quantum ghost imaging using entangled photon pairs has become a popular field of investigation, highlighting the quantum correlation between the photon pairs. We introduce a technique using spatial light modulators encoded with digital holograms to recover both the amplitude and the phase of the digital object. Down-converted photon pairs are entangled in the orbital angular momentum basis, and are commonly measured using spiral phase holograms. Consequently, by encoding a spiral ring-slit hologram into the idler arm, and varying it radially we can simultaneously recover the phase and amplitude of the object in question. We demonstrate that a good correlation between the encoded field function and the reconstructed images exists.
Using "Pseudomonas Putida xylE" Gene to Teach Molecular Cloning Techniques for Undergraduates
ERIC Educational Resources Information Center
Dong, Xu; Xin, Yi; Ye, Li; Ma, Yufang
2009-01-01
We have developed and implemented a serial experiment in molecular cloning laboratory course for undergraduate students majored in biotechnology. "Pseudomonas putida xylE" gene, encoding catechol 2, 3-dioxygenase, was manipulated to learn molecular biology techniques. The integration of cloning, expression, and enzyme assay gave students…
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
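The distortion metric described, spatial and spectral summation of individual quantization errors, is typically realized as a weighted Minkowski sum. A schematic sketch of just that pooling step (the sensitivity weights and exponent below are placeholders, not the paper's calibrated vision model or masking stages):

```python
import numpy as np

def perceptual_distortion(errors, sensitivity, beta=4.0):
    """Pool quantization errors into one perceptual distortion score:
    weight each error by the visual sensitivity of its subband/location,
    then apply Minkowski (beta-norm) spatial-spectral summation."""
    weighted = sensitivity * np.abs(errors)
    return np.sum(weighted ** beta) ** (1.0 / beta)

# Same error energy, but errors in a low-sensitivity (masked) region
# contribute less to the pooled distortion.
err = np.array([2.0, 2.0, 0.0, 0.0])
visible = perceptual_distortion(err, np.ones(4))
masked = perceptual_distortion(err, np.array([0.25, 0.25, 1.0, 1.0]))
```

An encoder can then adjust quantization step sizes until this pooled score, rather than a bit-rate-driven MSE target, reaches the desired quality level.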
Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm
NASA Astrophysics Data System (ADS)
Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi
2014-01-01
This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting in animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.
Multiplexed SNP genotyping using the Qbead™ system: a quantum dot-encoded microsphere-based assay
Xu, Hongxia; Sha, Michael Y.; Wong, Edith Y.; Uphoff, Janet; Xu, Yanzhang; Treadway, Joseph A.; Truong, Anh; O’Brien, Eamonn; Asquith, Steven; Stubbins, Michael; Spurr, Nigel K.; Lai, Eric H.; Mahoney, Walt
2003-01-01
We have developed a new method using the Qbead™ system for high-throughput genotyping of single nucleotide polymorphisms (SNPs). The Qbead system employs fluorescent Qdot™ semiconductor nanocrystals, also known as quantum dots, to encode microspheres that subsequently can be used as a platform for multiplexed assays. By combining mixtures of quantum dots with distinct emission wavelengths and intensities, unique spectral ‘barcodes’ are created that enable the high levels of multiplexing required for complex genetic analyses. Here, we applied the Qbead system to SNP genotyping by encoding microspheres conjugated to allele-specific oligonucleotides. After hybridization of oligonucleotides to amplicons produced by multiplexed PCR of genomic DNA, individual microspheres are analyzed by flow cytometry and each SNP is distinguished by its unique spectral barcode. Using 10 model SNPs, we validated the Qbead system as an accurate and reliable technique for multiplexed SNP genotyping. By modifying the types of probes conjugated to microspheres, the Qbead system can easily be adapted to other assay chemistries for SNP genotyping as well as to other applications such as analysis of gene expression and protein–protein interactions. With its capability for high-throughput automation, the Qbead system has the potential to be a robust and cost-effective platform for a number of applications. PMID:12682378
A Bayesian Model for Highly Accelerated Phase-Contrast MRI
Rich, Adam; Potter, Lee C.; Jin, Ning; Ash, Joshua; Simonetti, Orlando P.; Ahmad, Rizwan
2015-01-01
Purpose Phase-contrast magnetic resonance imaging (PC-MRI) is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to 4D flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to PC-MRI. Theory and Methods ReVEAL models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. Results ReVEAL is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R ≤ 10. For SV, Pearson r ≥ 0.996 for phantom imaging (n = 24) and r ≥ 0.956 for prospectively accelerated in vivo imaging (n = 10) for R ≤ 10. Conclusion ReVEAL enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to 4D flow imaging, where higher acceleration may be possible due to additional redundancy. PMID:26444911
Arduino Due based tool to facilitate in vivo two-photon excitation microscopy
Artoni, Pietro; Landi, Silvia; Sato, Sebastian Sulis; Luin, Stefano; Ratto, Gian Michele
2016-01-01
Two-photon excitation spectroscopy is a powerful technique for the characterization of the optical properties of genetically encoded and synthetic fluorescent molecules. Excitation spectroscopy requires tuning the wavelength of the Ti:sapphire laser while carefully monitoring the delivered power. To assist laser tuning and the control of delivered power, we developed an Arduino Due based tool for the automatic acquisition of high quality spectra. This tool is portable, fast, affordable and precise. It allowed studying the impact of scattering and of blood absorption on two-photon excitation light. In this way, we determined the wavelength-dependent deformation of excitation spectra occurring in deep tissues in vivo. PMID:27446677
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
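As a rough illustration of reference-based compression with a lightweight encoder, the toy sketch below replaces blocks that match the reference with a one-symbol flag and stores mismatching blocks raw; the actual protocol instead chooses adaptively between syndrome coding and hash coding per subsequence, which is not reproduced here:

```python
BLOCK = 32

def compress(seq, ref):
    """Toy reference-based compressor: a block matching the reference at
    the same offset becomes the flag ("M",); others are stored raw."""
    out = []
    for i in range(0, len(seq), BLOCK):
        s, r = seq[i:i+BLOCK], ref[i:i+BLOCK]
        out.append(("M",) if s == r else ("R", s))
    return out

def decompress(blocks, ref):
    """Rebuild the sequence, copying matched blocks from the reference."""
    seq, i = [], 0
    for b in blocks:
        seq.append(ref[i:i+BLOCK] if b[0] == "M" else b[1])
        i += BLOCK
    return "".join(seq)

ref = "ACGT" * 32
seq = ref[:40] + "T" + ref[41:]          # one substitution vs. reference
blocks = compress(seq, ref)
```

Only the one mutated block costs full storage; the decoder, which holds the reference, reconstructs everything else from flags, which is the asymmetry that keeps the client side light.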
Method and apparatus for optical communication by frequency modulation
Priatko, Gordon J.
1988-01-01
Laser optical communication according to this invention is carried out by producing multi-frequency laser beams having different frequencies, splitting one or more of these constituent beams into reference and signal beams, encoding information on the signal beams by frequency modulation and detecting the encoded information by heterodyne techniques. Much more information can be transmitted over optical paths according to the present invention than with the use of only one path as done previously.
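Heterodyne detection of a frequency-modulated signal can be sketched numerically: mixing (multiplying) the signal beam with the reference beam on a detector produces sum and difference frequencies, and the low-frequency beat carries the encoded frequency offset. A numpy illustration with arbitrary frequencies, not the patent's optical parameters:

```python
import numpy as np

fs, n = 1024, 1024                    # 1 s of data at 1024 samples/s
t = np.arange(n) / fs
f_ref, f_sig = 200.0, 205.0           # reference and FM-shifted signal

# Mixing yields components at f_sig - f_ref (5 Hz) and f_sig + f_ref.
mixed = np.cos(2 * np.pi * f_sig * t) * np.cos(2 * np.pi * f_ref * t)
spectrum = np.abs(np.fft.rfft(mixed))
beat = int(np.argmax(spectrum[:50]))  # inspect the low-frequency band
```

The detector only needs bandwidth for the beat, not for the optical carrier, which is what makes heterodyne readout of frequency-encoded information practical.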
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two-base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
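The two-base encoding considered here maps each pair of adjacent bases to one of four colors; with 2-bit base codes, the color is simply the XOR of the two codes (identical bases give 0, complementary bases give 3), and decoding is deterministic once the first base is known. A minimal sketch of that encode/decode step, without the alignment machinery:

```python
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}
INV = "ACGT"

def encode_colors(seq):
    """Two-base encoding: each color is the XOR of the 2-bit codes of
    adjacent bases, so one read of length n yields n-1 colors."""
    return [BASE[a] ^ BASE[b] for a, b in zip(seq, seq[1:])]

def decode_colors(first_base, colors):
    """Deterministic decoding given the known first base."""
    seq = [BASE[first_base]]
    for c in colors:
        seq.append(seq[-1] ^ c)
    return "".join(INV[b] for b in seq)

colors = encode_colors("ATGGCA")
```

Because each base is recovered from the previous one, a single color measurement error corrupts every downstream base, which is exactly why the paper's aligner must weigh candidate error modes rather than decode naively.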
A Survey of Artificial Immune System Based Intrusion Detection
Li, Tao; Hu, Xinlei; Wang, Feng; Zou, Yang
2014-01-01
In the area of computer security, Intrusion Detection (ID) is a mechanism that attempts to discover abnormal access to computers by analyzing various interactions. There is a lot of literature about ID, but this study only surveys the approaches based on Artificial Immune System (AIS). The use of AIS in ID is an appealing concept in current techniques. This paper summarizes AIS based ID methods from a new view point; moreover, a framework is proposed for the design of AIS based ID Systems (IDSs). This framework is analyzed and discussed based on three core aspects: antibody/antigen encoding, generation algorithm, and evolution mode. Then we collate the commonly used algorithms, their implementation characteristics, and the development of IDSs into this framework. Finally, some of the future challenges in this area are also highlighted. PMID:24790549
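A classic generation algorithm in the AIS framework surveyed here is negative selection: candidate detectors (antibodies) are generated randomly and censored against the encoded "self" set, so any surviving detector can only fire on anomalous (nonself) input. A minimal bit-string sketch with r-contiguous-bits matching; the string length, r, and self set are illustrative:

```python
import random

def r_contiguous_match(a, b, r):
    """Detector a matches string b if they agree on r contiguous bits."""
    return any(a[i:i+r] == b[i:i+r] for i in range(len(a) - r + 1))

def censor_detectors(self_set, n_detectors, length, r, seed=0):
    """Negative selection: keep only randomly generated detectors that
    match no 'self' string; survivors flag nonself (anomalous) input."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

self_set = {"00000000", "11111111"}
detectors = censor_detectors(self_set, 5, 8, 4)
```

In an IDS setting, the bit strings would encode features of network traffic or system calls, and a detector firing on live input raises an intrusion alert.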
Wang, Jian-Feng; Liu, Hong-Lin; Zhang, Shu-Qin; Yu, Xiang-Dong; Sun, Zhong-Zhou; Jin, Shang-Zhong; Zhang, Zai-Xuan
2013-04-01
Basic principles, development trends and application status of distributed optical fiber Raman temperature sensors (DTS) are introduced. Performance parameters of a DTS system include the sensing optical fiber length, temperature measurement uncertainty, spatial resolution and measurement time. These parameters are correlated, and it is difficult to improve them all at the same time with a single technology. So a variety of key techniques, such as Raman amplification, pulse coding, Raman-related dual-wavelength self-correction and embedded optical switching, are researched to improve the performance of the DTS system. A 1467 nm continuous laser is used as the pump laser and the light source of the DTS system (a 1550 nm pulse laser) is amplified. When the length of the sensing optical fiber is 50 km, the Raman gain is about 17 dB. Raman gain can partially compensate the transmission loss of the optical fiber, so that the sensing length can reach 50 km. In a DTS system using the pulse coding technique, the pulse laser is coded by a 211-bit loop encoder and correlation calculation is used to demodulate temperature. The encoded laser signal is correlated, whereas the noise is not, so the signal-to-noise ratio (SNR) of the DTS system can be improved significantly. The experiments are carried out in DTS systems with single-mode optical fiber and multimode optical fiber respectively. Temperature measurement uncertainty can reach 1 °C in both cases. In a DTS system using the Raman-related dual-wavelength self-correction technique, the wavelength difference of the two light sources must be one Raman frequency shift in the optical fiber. For example, if the wavelength of the main laser is 1550 nm, the wavelength of the second laser must be 1450 nm. Spatial resolution of the DTS system is improved to 2 m by using the dual-wavelength self-correction technique.
An optical switch is embedded in the DTS system, so that the number of temperature measurement channels is multiplied and the total length of the sensing optical fiber is effectively extended, composing an optical fiber sensor network.
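The SNR mechanism behind pulse coding with correlation demodulation can be illustrated with any code that has good autocorrelation. The sketch below uses a 13-bit Barker sequence as a stand-in (not the loop code used in the DTS system): correlation compresses the full code energy into one sharp peak while the sidelobes, where uncorrelated noise would land, stay at magnitude 1.

```python
import numpy as np

# 13-bit Barker sequence: its autocorrelation has peak 13 and all
# sidelobes at magnitude <= 1, a 13:1 peak-to-sidelobe ratio.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")
peak = int(acf.max())                 # full code energy in a single lag
mask = np.arange(len(acf)) != acf.argmax()
sidelobe = int(np.abs(acf[mask]).max())
```

In the DTS context this is why the coded pulse train can be demodulated into a clean temperature trace: the backscatter correlated with the known code adds coherently, while the noise does not.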
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results.
Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
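The DPCM core described above, prediction from previously reconstructed pixels followed by a non-uniform quantizer with fine steps near zero and coarse steps elsewhere, can be sketched as follows. This is a minimal illustration, not NASA's actual predictor or quantizer design; the `LEVELS` table and the mid-gray starting prediction are assumptions.

```python
# Illustrative non-uniform quantizer levels: fine near zero (small prediction
# errors dominate in natural video), coarse farther out.
LEVELS = [-64, -32, -16, -8, -4, -2, 0, 2, 4, 8, 16, 32, 64]

def quantize(residual):
    """Map a prediction residual to the nearest quantizer level."""
    return min(LEVELS, key=lambda lv: abs(lv - residual))

def dpcm_encode_line(line):
    """DPCM-encode one scan line: predict each pixel from the *reconstructed*
    previous pixel (so encoder and decoder stay in lockstep), then emit the
    quantized residual. The residual stream would then feed the Huffman coder."""
    codes, prediction = [], 128          # assumed mid-gray start prediction
    for pixel in line:
        q = quantize(pixel - prediction)
        codes.append(q)
        prediction = max(0, min(255, prediction + q))  # decoder-tracked value
    return codes

def dpcm_decode_line(codes):
    out, prediction = [], 128
    for q in codes:
        prediction = max(0, min(255, prediction + q))
        out.append(prediction)
    return out
```

Because the encoder predicts from the clipped, quantized reconstruction rather than the original pixels, quantization error does not accumulate along the line.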
Compression of transmission bandwidth requirements for a certain class of band-limited functions.
NASA Technical Reports Server (NTRS)
Smith, I. R.; Schilling, D. L.
1972-01-01
A study of source-encoding techniques that afford a reduction of data-transmission rates is made with particular emphasis on the compression of transmission bandwidth requirements of band-limited functions. The feasibility of bandwidth compression through analog signal rooting is investigated. It is found that the N-th roots of elements of a certain class of entire functions of exponential type possess contour integrals resembling Fourier transforms, the Cauchy principal values of which are compactly supported on an interval one N-th the size of that of the original function. Exploring this theoretical result, it is found that synthetic roots can be generated, which closely approximate the N-th roots of a certain class of band-limited signals and possess spectra that are essentially confined to a bandwidth one N-th that of the signal subjected to the rooting operation. A source-encoding algorithm based on this principle is developed that allows the compression of data-transmission requirements for a certain class of band-limited signals.
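The rooting principle can be illustrated numerically: a strictly positive band-limited signal raised to the N-th power occupies N times the bandwidth, and taking the N-th root recovers the narrow spectrum. A toy sketch with a naive DFT (the specific waveform is an assumption, not from the paper):

```python
import cmath, math

def dft_mag(x):
    """Magnitudes of the naive O(N^2) DFT of a real sequence."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) / N for k in range(N)]

N = 32
base = [1 + 0.9 * math.cos(2 * math.pi * n / N) for n in range(N)]
signal = [b ** 3 for b in base]         # cubing spreads energy up to harmonic 3
root = [s ** (1 / 3) for s in signal]   # cube root: energy confined to harmonic 1

spectrum_signal = dft_mag(signal)
spectrum_root = dft_mag(root)
```

The cube root is exact here because the signal stays strictly positive; the paper's synthetic-root construction is what extends the idea to a more general class of band-limited signals.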
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
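The underlying (2, 2) construction that halftone visual cryptography builds on can be sketched as follows. This is the classic Naor-Shamir scheme with 1x2 subpixel expansion, not the paper's void-and-cluster halftoning:

```python
import random

# 1 = black subpixel, 0 = transparent. Each secret pixel expands to a
# subpixel pair in each share.
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret_bits, rng):
    """Split a secret bit string (1 = black pixel) into two random shares."""
    share1, share2 = [], []
    for bit in secret_bits:
        p = PATTERNS[rng.randrange(2)]
        share1.append(p)
        # White pixel: identical patterns, so the stack stays half black.
        # Black pixel: complementary patterns, so the stack goes fully black.
        share2.append(p if bit == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Superimpose transparencies: a subpixel is black if either share is."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]
```

Each share in isolation is a uniformly random pattern, yet stacking reproduces the secret as a contrast difference (half black versus fully black pairs).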
Yan, Y; Xu, W; Chen, H; Ma, Z; Zhu, Y; Cai, S
1994-01-01
The partial structure gene encoding ES antigen derived from Trichinella spiralis (TSP) muscle larvae was cloned, characterized, and expressed in E. coli. The target DNA (0.7 kb) was directly obtained from the TSP total RNA by using the RNA PCR technique. Based on restriction enzyme (RE) digestion analysis, the fragment was cloned into the fusion expression vector pEX31C. SDS-PAGE electrophoresis showed that a 37 kDa fusion protein was expressed in E. coli containing the recombinant plasmid. The expressed protein was over 22% of the total cell protein, and it aggregated in the form of inclusion bodies in E. coli. The purified protein could be recognized in ELISA both by sera from swine infected with TSP and by the monoclonal antibody against TSP. These findings suggest that the recombinant protein is a potentially valuable antigen both for immunodiagnosis and vaccine development of trichinellosis.
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
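A software sketch of the Golomb-Rice entropy coding applied to quantized prediction residuals. The zigzag mapping of signed residuals to non-negative integers is an assumption for illustration; the chip's actual bit-level format may differ:

```python
def rice_encode(v, k):
    """Golomb-Rice code with parameter m = 2**k (k >= 1):
    unary-coded quotient, '0' terminator, k-bit binary remainder.
    Signed residuals are zigzag-mapped so small magnitudes get short codes."""
    u = 2 * v if v >= 0 else -2 * v - 1      # zigzag: sign into the LSB
    q, r = u >> k, u & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def rice_decode(bits, k):
    """Decode one codeword; return (value, number of bits consumed)."""
    q = 0
    while bits[q] == '1':                    # unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)        # k-bit remainder
    u = (q << k) | r
    v = u >> 1
    return (v if u % 2 == 0 else -v - 1), q + 1 + k
```

Golomb-Rice codes suit this design because encoding needs only shifts and masks, which maps naturally onto the simple column-level logic shared with the single-slope ADC.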
Enhancing Security of Double Random Phase Encoding Based on Random S-Box
NASA Astrophysics Data System (ADS)
Girija, R.; Singh, Hukum
2018-06-01
In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Because an S-Box used alone is not reliable and DRPE does not provide non-linearity, our system unites the effectiveness of an S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box for our DRPE system. The randomness and scalability achieved by the applied technique are additional features of the proposed solution. The firmness of the random S-Box is investigated in terms of performance parameters such as non-linearity, strict avalanche criterion, bit independence criterion, linear and differential approximation probabilities, etc. S-Boxes convey non-linearity to cryptosystems, which is a significant parameter and very essential for DRPE. The strength of the proposed cryptosystem has been analysed using various parameters such as MSE, PSNR, correlation coefficient analysis, noise analysis, SVD analysis, etc. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.
Inferring genome-wide interplay landscape between DNA methylation and transcriptional regulation.
Tang, Binhua; Wang, Xin
2015-01-01
DNA methylation and transcriptional regulation play important roles in cancer cell development and differentiation processes. Based on the currently available cell line profiling information from the ENCODE Consortium, we propose a Bayesian inference model to infer and construct a genome-wide interaction landscape between DNA methylation and transcriptional regulation, which sheds light on the complex functional mechanisms underlying human cancer and disease. For the first time, we select profiling information for all currently available cell lines (>=20) and transcription factors (>=80) from the ENCODE Consortium portal. Through the integration of these genome-wide profiling sources, our analysis detects multiple functional loci of interest and indicates that DNA methylation is cell- and region-specific, owing to its interplay with transcription regulatory activities. We validate our analysis results against corresponding RNA-sequencing data for the detected genomic loci. Our results provide novel and meaningful insights into the interplay mechanisms of transcriptional regulation and gene expression for human cancer and disease studies.
Catching the engram: strategies to examine the memory trace
2012-01-01
Memories are stored within neuronal ensembles in the brain. Modern genetic techniques can be used to not only visualize specific neuronal ensembles that encode memories (e.g., fear, craving) but also to selectively manipulate those neurons. These techniques are now being expanded for the study of various types of memory. In this review, we will summarize the genetic methods used to visualize and manipulate neurons involved in the representation of memory engrams. The methods will help clarify how memory is encoded, stored and processed in the brain. Furthermore, these approaches may contribute to our understanding of the pathological mechanisms associated with human memory disorders and, ultimately, may aid the development of therapeutic strategies to ameliorate these diseases. PMID:22999350
Robust 3D DFT video watermarking
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; O'Ruanaidh, Joseph J.; Pun, Thierry
1999-04-01
This paper proposes a new approach for digital watermarking and secure copyright protection of videos, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method presented here is based on the discrete Fourier transform (DFT) of three dimensional chunks of video scene, in contrast with previous works on video watermarking where each video frame was marked separately, or where only intra-frame or motion compensation parameters were marked in MPEG compressed videos. Two kinds of information are hidden in the video: a watermark and a template. Both are encoded using an owner key to ensure the system security and are embedded in the 3D DFT magnitude of video chunks. The watermark is a copyright information encoded in the form of a spread spectrum signal. The template is a key based grid and is used to detect and invert the effect of frame-rate changes, aspect-ratio modification and rescaling of frames. The template search and matching is performed in the log-log-log map of the 3D DFT magnitude. The performance of the presented technique is evaluated experimentally and compared with a frame-by-frame 2D DFT watermarking approach.
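The magnitude-domain spread-spectrum embedding can be sketched in one dimension. The paper operates on the 3D DFT of video chunks; this reduced version with a key-seeded +/-1 sequence is only an illustration, and the marked signal is kept complex for simplicity:

```python
import cmath, math, random

def dft(x, inverse=False):
    """Naive O(N^2) unitary discrete Fourier transform."""
    N = len(x)
    sign = 1 if inverse else -1
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N)) / math.sqrt(N) for k in range(N)]

def watermark_sequence(n, key):
    """Key-seeded spread-spectrum sequence of +/-1 (the owner key)."""
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(signal, key, strength=0.5):
    """Add the watermark to the DFT magnitudes, preserving the phases."""
    w = watermark_sequence(len(signal), key)
    X = [cmath.rect(abs(v) + strength * wi, cmath.phase(v))
         for v, wi in zip(dft(signal), w)]
    return dft(X, inverse=True)

def detect(signal, key):
    """Correlate DFT magnitudes with the key's sequence; high = marked."""
    w = watermark_sequence(len(signal), key)
    mags = [abs(v) for v in dft(signal)]
    return sum(m * wi for m, wi in zip(mags, w)) / len(w)
```

Embedding in the DFT magnitude is what makes the mark survive the geometric manipulations the template is designed to invert, since magnitude is unaffected by cyclic shifts.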
Lossy to lossless object-based coding of 3-D MRI data.
Menegaz, Gloria; Thiran, Jean-Philippe
2002-01-01
We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
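The integer-to-integer lifting idea can be sketched with the simplest case, the Haar/S-transform: a predict step forms the detail, an update step forms the approximation, and both are exactly invertible in integer arithmetic, which is what enables lossless coding. A one-level, one-dimensional sketch (the paper uses a full 3-D multi-level transform):

```python
def haar_lifting_forward(x):
    """Integer-to-integer Haar (S-transform) via lifting steps.
    Requires an even-length integer sequence."""
    s, d = [], []
    for i in range(0, len(x), 2):
        detail = x[i + 1] - x[i]        # predict the odd sample from the even
        approx = x[i] + (detail >> 1)   # update: truncated integer average
        s.append(approx)
        d.append(detail)
    return s, d

def haar_lifting_inverse(s, d):
    """Exact inverse: undo the update, then the predict step."""
    out = []
    for approx, detail in zip(s, d):
        even = approx - (detail >> 1)
        out += [even, even + detail]
    return out
```

Because each lifting step is undone by subtracting exactly what was added, the rounding (`>> 1`) costs nothing: the round-trip is bit-exact, unlike a floating-point wavelet transform.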
Computational intelligence techniques for biological data mining: An overview
NASA Astrophysics Data System (ADS)
Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari
2014-10-01
Computational techniques have been successfully utilized for highly accurate analysis and modeling of multifaceted, raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective in overcoming the limitations of traditional in-vitro experiments on the constantly increasing sequence data. The most critical problems that have caught researchers' attention include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, and analysis of microarray gene expression data. To solve these problems, various machine-learning classification and clustering techniques have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-Means, K-NN, SVM, rough set classifiers, decision trees and HMM-based algorithms. Major difficulties in applying these algorithms include the limitations of existing feature encoding and selection methods in extracting the best features, increasing classification accuracy, and decreasing the running-time overheads of the learning algorithms. This research is potentially useful in drug design and in the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.
Recollection-Based Retrieval Is Influenced by Contextual Variation at Encoding but Not at Retrieval
Rosenstreich, Eyal; Goshen-Gottstein, Yonatan
2015-01-01
In this article, we investigated the effects of variations at encoding and retrieval on recollection. We argue that recollection is more likely to be affected by the processing that information undergoes at encoding than at retrieval. To date, manipulations shown to affect recollection were typically carried out at encoding. Therefore, an open question is whether these same manipulations would also affect recollection when carried out at retrieval, or whether there is an inherent connection between their effects on recollection and the encoding stage. We therefore manipulated, at either encoding or retrieval, fluency of processing (Experiment 1)—typically found not to affect recollection—and the amount of attentional resources available for processing (Experiments 2 and 3)—typically reported to affect recollection. We found that regardless of the type of manipulation, recollection was affected more by manipulations carried out at encoding and was essentially unaffected when these manipulations were carried out at retrieval. These findings suggest an inherent dependency between recollection-based retrieval and the encoding stage. It seems that because recollection is a contextual-based retrieval process, it is determined by the processing information undergoes at encoding—at the time when context is bound with the items—but not at retrieval—when context is only recovered. PMID:26135583
Sensitivity quantification of remote detection NMR and MRI
NASA Astrophysics Data System (ADS)
Granwehr, J.; Seeley, J. A.
2006-04-01
A sensitivity analysis is presented of the remote detection NMR technique, which facilitates the spatial separation of encoding and detection of spin magnetization. Three different cases are considered: remote detection of a transient signal that must be encoded point-by-point like a free induction decay, remote detection of an experiment where the transient dimension is reduced to one data point like phase encoding in an imaging experiment, and time-of-flight (TOF) flow visualization. For all cases, the sensitivity enhancement is proportional to the relative sensitivity between the remote detector and the circuit that is used for encoding. It is shown for the case of an encoded transient signal that the sensitivity does not scale unfavorably with the number of encoded points compared to direct detection. Remote enhancement scales as the square root of the ratio of corresponding relaxation times in the two detection environments. Thus, remote detection especially increases the sensitivity of imaging experiments of porous materials with large susceptibility gradients, which cause a rapid dephasing of transverse spin magnetization. Finally, TOF remote detection, in which the detection volume is smaller than the encoded fluid volume, allows partial images corresponding to different time intervals between encoding and detection to be recorded. These partial images, which contain information about the fluid displacement, can be recorded, in an ideal case, with the same sensitivity as the full image detected in a single step with a larger coil.
NASA Astrophysics Data System (ADS)
Diamond, D. H.; Heyns, P. S.; Oberholster, A. J.
2016-12-01
The measurement of instantaneous angular speed is being increasingly investigated for its use in a wide range of condition monitoring and prognostic applications. Central to many measurement techniques are incremental shaft encoders recording the arrival times of shaft angular increments. The conventional approach to processing these signals assumes that the angular increments are equidistant. This assumption is generally incorrect when working with toothed wheels and especially zebra tape encoders and has been shown to introduce errors in the estimated shaft speed. There are some proposed methods in the literature that aim to compensate for this geometric irregularity. Some of the methods require the shaft speed to be perfectly constant for calibration, something rarely achieved in practice. Other methods assume the shaft speed to be nearly constant with minor deviations. Therefore existing methods cannot calibrate the entire shaft encoder geometry for arbitrary shaft speeds. The present article presents a method to calculate the shaft encoder geometry for arbitrary shaft speed profiles. The method uses Bayesian linear regression to calculate the encoder increment distances. The method is derived and then tested against simulated and laboratory experiments. The results indicate that the proposed method is capable of accurately determining the shaft encoder geometry for any shaft speed profile.
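A much-simplified version of geometry calibration can be sketched: if shaft speed is assumed approximately constant within each revolution (precisely the restriction the paper's Bayesian linear regression removes), each increment angle is the fraction of the revolution period occupied by its pulse interval, averaged over revolutions. The simulated angles and speeds below are made up for illustration:

```python
import math

def estimate_geometry(arrival_times, n_increments):
    """Estimate encoder increment angles from pulse arrival times, assuming
    speed is roughly constant within each revolution. This is a simplified
    calibration, not the paper's Bayesian regression for arbitrary speeds."""
    dt = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    n_revs = len(dt) // n_increments
    angles = [0.0] * n_increments
    for r in range(n_revs):
        rev = dt[r * n_increments:(r + 1) * n_increments]
        total = sum(rev)                 # one full revolution = 2*pi radians
        for i, d in enumerate(rev):
            angles[i] += 2 * math.pi * d / total
    return [a / n_revs for a in angles]

# Simulated zebra-tape encoder with unequal increments, observed at a
# different constant speed on each revolution.
true_angles = [1.5, 1.7, 1.4, 2 * math.pi - 4.6]
times, t = [0.0], 0.0
for speed in (10.0, 12.0, 8.0):          # rad/s, one value per revolution
    for angle in true_angles:
        t += angle / speed               # arrival time of the next pulse
        times.append(t)
```

When speed drifts within a revolution this estimator is biased, which is the failure mode that motivates regressing geometry and speed profile jointly, as the paper does.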
Semantic Modelling of Digital Forensic Evidence
NASA Astrophysics Data System (ADS)
Kahvedžić, Damir; Kechadi, Tahar
The reporting of digital investigation results is traditionally carried out in prose, and a large investigation may require successive communication of findings between different parties. Popular forensic suites aid the reporting process by storing provenance and positional data but do not automatically encode why the evidence is considered important. In this paper we introduce an evidence management methodology to encode the semantic information of evidence. A structured vocabulary of terms, an ontology, is used to model the results in a logical and predefined manner. The descriptions are application independent and automatically organised. The encoded descriptions aim to help the investigation in the task of report writing and evidence communication and can be used in addition to existing evidence management techniques.
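The idea of encoding evidence semantics against a predefined vocabulary can be sketched as a tiny subject-predicate-object triple store. The vocabulary terms and API here are hypothetical illustrations, not the paper's actual ontology:

```python
# Hypothetical controlled vocabulary of predicates; a real system would use
# a full ontology with classes, hierarchies, and inference.
VOCAB = {"foundIn", "indicates", "ownedBy"}

class EvidenceStore:
    """Minimal triple store for semantic descriptions of evidence items."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        if predicate not in VOCAB:
            raise ValueError("unknown predicate: " + predicate)
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]
```

Restricting predicates to a shared vocabulary is what makes the descriptions application-independent: any party can query "everything that `indicates` X" without knowing which tool produced the record.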
Novel selection methods for DNA-encoded chemical libraries
Chan, Alix I.; McGregor, Lynn M.; Liu, David R.
2015-01-01
Driven by the need for new compounds to serve as biological probes and leads for therapeutic development and the growing accessibility of DNA technologies including high-throughput sequencing, many academic and industrial groups have begun to use DNA-encoded chemical libraries as a source of bioactive small molecules. In this review, we describe the technologies that have enabled the selection of compounds with desired activities from these libraries. These methods exploit the sensitivity of in vitro selection coupled with DNA amplification to overcome some of the limitations and costs associated with conventional screening methods. In addition, we highlight newer techniques with the potential to be applied to the high-throughput evaluation of DNA-encoded chemical libraries. PMID:25723146
ERIC Educational Resources Information Center
Evans, Ian M.
2010-01-01
Affective priming is a technique used in experimental psychology to investigate the organization of emotional schemata not fully available to conscious awareness. The presentation of stimuli (the prime) with strong positive emotional valence alters the accessibility of positive stimuli within the individual's emotionally encoded cognitive system.…
Jean, Julie; Blais, Burton; Darveau, André; Fliss, Ismaïl
2001-01-01
A nucleic acid sequence-based amplification (NASBA) technique for the detection of hepatitis A virus (HAV) in foods was developed and compared to the traditional reverse transcription (RT)-PCR technique. Oligonucleotide primers targeting the VP1 and VP2 genes encoding the major HAV capsid proteins were used for the amplification of viral RNA in an isothermal process resulting in the accumulation of RNA amplicons. Amplicons were detected by hybridization with a digoxigenin-labeled oligonucleotide probe in a dot blot assay format. Using the NASBA, as little as 0.4 ng of target RNA/ml was detected, compared to 4 ng/ml for RT-PCR. When crude HAV viral lysate was used, a detection limit of 2 PFU (4 × 10^2 PFU/ml) was obtained with NASBA, compared to 50 PFU (1 × 10^4 PFU/ml) obtained with RT-PCR. No interference was encountered in the amplification of HAV RNA in the presence of excess nontarget RNA or DNA. The NASBA system successfully detected HAV recovered from experimentally inoculated samples of waste water, lettuce, and blueberries. Compared to RT-PCR and other amplification techniques, the NASBA system offers several advantages in terms of sensitivity, rapidity, and simplicity. This technique should be readily adaptable for detection of other RNA viruses in both foods and clinical samples. PMID:11722911
Jang, Mooseok; Ruan, Haowen; Judkewitz, Benjamin; Yang, Changhuei
2014-01-01
The time-reversed ultrasonically encoded (TRUE) optical focusing technique is a method that is capable of focusing light deep within a scattering medium. This theoretical study aims to explore the depth limits of the TRUE technique for biological tissues in the context of two primary constraints: the safety limit of the incident light fluence, and a limited TRUE recording time (assumed to be 1 ms), as dynamic scatterer movements in a living sample can break the time-reversal scattering symmetry. Our numerical simulation indicates that TRUE has the potential to render an optical focus with a peak-to-background ratio of ~2 at a depth of ~103 mm at a wavelength of 800 nm in a phantom with tissue scattering characteristics. This study sheds light on the allocation of photon budget in each step of the TRUE technique, the impact of low signal on the phase measurement error, and the eventual impact of the phase measurement error on the strength of the TRUE optical focus. PMID:24663917
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
Coon, Keith D; Valla, Jon; Szelinger, Szabolics; Schneider, Lonnie E; Niedzielko, Tracy L; Brown, Kevin M; Pearson, John V; Halperin, Rebecca; Dunckley, Travis; Papassotiropoulos, Andreas; Caselli, Richard J; Reiman, Eric M; Stephan, Dietrich A
2006-08-01
The role of mitochondrial dysfunction in the pathogenesis of Alzheimer's disease (AD) has been well documented. Though evidence for the role of mitochondria in AD seems incontrovertible, the impact of mitochondrial DNA (mtDNA) mutations in AD etiology remains controversial. Though mutations in mitochondrially encoded genes have repeatedly been implicated in the pathogenesis of AD, many of these studies have been plagued by lack of replication as well as potential contamination of nuclear-encoded mitochondrial pseudogenes. To assess the role of mtDNA mutations in the pathogenesis of AD, while avoiding the pitfalls of nuclear-encoded mitochondrial pseudogenes encountered in previous investigations and showcasing the benefits of a novel resequencing technology, we sequenced the entire coding region (15,452 bp) of mtDNA from 19 extremely well-characterized AD patients and 18 age-matched, unaffected controls utilizing a new, reliable, high-throughput array-based resequencing technique, the Human MitoChip. High-throughput, array-based DNA resequencing of the entire mtDNA coding region from platelets of 37 subjects revealed the presence of 208 loci displaying a total of 917 sequence variants. There were no statistically significant differences in overall mutational burden between cases and controls, however, 265 independent sites of statistically significant change between cases and controls were identified. Changed sites were found in genes associated with complexes I (30.2%), III (3.0%), IV (33.2%), and V (9.1%) as well as tRNA (10.6%) and rRNA (14.0%). Despite their statistical significance, the subtle nature of the observed changes makes it difficult to determine whether they represent true functional variants involved in AD etiology or merely naturally occurring dissimilarity. 
Regardless, this study demonstrates the tremendous value of this novel mtDNA resequencing platform, which avoids the pitfalls of erroneously amplifying nuclear-encoded mtDNA pseudogenes, and our proposed analysis paradigm, which utilizes the availability of raw signal intensity values for each of the four potential alleles to facilitate quantitative estimates of mtDNA heteroplasmy. This information provides a potential new target for burgeoning diagnostics and therapeutics that could truly assist those suffering from this devastating disorder.
PNA-encoded chemical libraries.
Zambaldo, Claudio; Barluenga, Sofia; Winssinger, Nicolas
2015-06-01
Peptide nucleic acid (PNA)-encoded chemical libraries along with DNA-encoded libraries have provided a powerful new paradigm for library synthesis and ligand discovery. PNA-encoding stands out for its compatibility with standard solid phase synthesis and the technology has been used to prepare libraries of peptides, heterocycles and glycoconjugates. Different screening formats have now been reported including selection-based and microarray-based methods that have yielded specific ligands against diverse target classes including membrane receptors, lectins and challenging targets such as Hsp70. Copyright © 2015 Elsevier Ltd. All rights reserved.
Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk
2015-01-01
Human memory is content addressable, i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
Qu, Xiaojun; Jin, Haojun; Liu, Yuqian; Sun, Qingjiang
2018-03-06
The combination of microbead arrays, isothermal amplification, and molecular signaling enables the continuous development of next-generation molecular diagnostic techniques. Herein we report the implementation of a nicking endonuclease-assisted strand displacement amplification reaction on quantum dot-encoded microbeads (Qbeads), and demonstrate its feasibility for multiplexed miRNA assay in real samples. The Qbead features a well-defined core-shell superstructure, with dual-colored quantum dots loaded in the silica core and shell, respectively, and exhibits remarkably high optical encoding stability. Specially designed stem-loop-structured probes were immobilized onto the Qbead for specific target recognition and amplification. In the presence of a low abundance of miRNA target, the target triggered exponential amplification, producing a large quantity of stem-G-quadruplexes, which could be selectively signaled by a fluorescent G-quadruplex intercalator. In a one-step operation, the Qbead-based isothermal amplification and signaling generated an emissive "core-shell-satellite" superstructure, changing the Qbead emission color. The target abundance-dependent emission-color changes of the Qbead allowed direct, visual detection of a specific miRNA target. This visualization method achieved a limit of detection at the subfemtomolar level, a linear dynamic range of 4.5 logs, and point-mutation discrimination capability for precise miRNA analyses. An array of three encoded Qbeads could simultaneously quantify three miRNA biomarkers in ∼500 human hepatoma carcinoma cells. With its advancements in ease of operation, multiplexing, and visualization, the isothermal amplification-on-Qbead assay could potentially enable the development of point-of-care diagnostics.
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest and the generation of a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target pattern, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
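As a rough illustration of the two ingredients named above (not the authors' LSHSIM implementation; function names and details are ours), an LSH band signature and a run-length representation that lets the Hamming distance be computed run by run can be sketched in Python:

```python
# Illustrative sketch of LSH bucketing and RLE-accelerated Hamming distance.
# Assumes binary patterns of equal, nonzero length.

def lsh_signature(pattern, indices):
    """Project a binary pattern onto a fixed subset of positions (one hash band)."""
    return tuple(pattern[i] for i in indices)

def rle_encode(pattern):
    """Run-length encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in pattern:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def hamming_from_rle(runs_a, runs_b):
    """Hamming distance computed run-by-run instead of element-by-element."""
    dist = 0
    la, lb = list(runs_a), list(runs_b)
    ia = ib = 0
    va, na = la[ia]
    vb, nb = lb[ib]
    while True:
        step = min(na, nb)          # consume the shorter remaining run
        if va != vb:
            dist += step
        na -= step
        nb -= step
        if na == 0:
            ia += 1
            if ia == len(la):
                break
            va, na = la[ia]
        if nb == 0:
            ib += 1
            if ib == len(lb):
                break
            vb, nb = lb[ib]
    return dist
```

For patterns dominated by long constant runs, the run-by-run loop touches far fewer entries than an element-wise comparison, which is the source of the speed-up the abstract describes.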
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and discrete cosine transform (DCT), or a two-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
Image/text automatic indexing and retrieval system using context vector approach
NASA Astrophysics Data System (ADS)
Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick
1995-11-01
Thousands of documents and images are generated daily both on and off line on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. This technique is demonstrated here and is based on the concept of "context vectors," which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free-text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as queries, without performing segmentation or object recognition.
Functional metagenomics reveals novel β-galactosidases not predictable from gene sequences.
Cheng, Jiujun; Romantsov, Tatyana; Engel, Katja; Doxey, Andrew C; Rose, David R; Neufeld, Josh D; Charles, Trevor C
2017-01-01
The techniques of metagenomics have allowed researchers to access the genomic potential of uncultivated microbes, but there remain significant barriers to determination of gene function based on DNA sequence alone. Functional metagenomics, in which DNA is cloned and expressed in surrogate hosts, can overcome these barriers, and make important contributions to the discovery of novel enzymes. In this study, a soil metagenomic library carried in an IncP cosmid was used for functional complementation for β-galactosidase activity in both Sinorhizobium meliloti (α-Proteobacteria) and Escherichia coli (γ-Proteobacteria) backgrounds. One β-galactosidase, encoded by six overlapping clones that were selected in both hosts, was identified as a member of glycoside hydrolase family 2. We could not identify ORFs obviously encoding possible β-galactosidases in 19 other sequenced clones that were only able to complement S. meliloti. Based on low sequence identity to other known glycoside hydrolases, yet not β-galactosidases, three of these ORFs were examined further. Biochemical analysis confirmed that all three encoded β-galactosidase activity. Lac36W_ORF11 and Lac161_ORF7 had conserved domains, but lacked similarities to known glycoside hydrolases. Lac161_ORF10 had neither conserved domains nor similarity to known glycoside hydrolases. Bioinformatic and structural modeling implied that Lac161_ORF10 protein represented a novel enzyme family with a five-bladed propeller glycoside hydrolase domain. By discovering founding members of three novel β-galactosidase families, we have reinforced the value of functional metagenomics for isolating novel genes that could not have been predicted from DNA sequence analysis alone.
Everaert, Jonas; Koster, Ernst H W
2015-10-01
Emotional biases in attention modulate encoding of emotional material into long-term memory, but little is known about the role of such attentional biases during emotional memory retrieval. The present study investigated how emotional biases in memory are related to attentional allocation during retrieval. Forty-nine individuals encoded emotionally positive and negative meanings derived from ambiguous information and then searched their memory for encoded meanings in response to a set of retrieval cues. The remember/know/new procedure was used to classify memories as recollection-based or familiarity-based, and gaze behavior was monitored throughout the task to measure attentional allocation. We found that a bias in sustained attention during recollection-based, but not familiarity-based, retrieval predicted subsequent memory bias toward positive versus negative material following encoding. Thus, during emotional memory retrieval, attention affects controlled forms of retrieval (i.e., recollection) but does not modulate relatively automatic, familiarity-based retrieval. These findings enhance understanding of how distinct components of attention regulate the emotional content of memories. Implications for theoretical models and emotion regulation are discussed. (c) 2015 APA, all rights reserved.
Cates, Joshua W.; Bieniosek, Matthew F.; Levin, Craig S.
2017-01-01
Maintaining excellent timing resolution in the generation of silicon photomultiplier (SiPM)-based time-of-flight positron emission tomography (TOF-PET) systems requires a large number of high-speed, high-bandwidth electronic channels and components. To minimize the cost and complexity of a system's back-end architecture and data acquisition, many analog signals are often multiplexed to fewer channels using techniques that encode timing, energy, and position information. With progress in the development of SiPMs having lower dark noise, afterpulsing, and cross talk along with higher photodetection efficiency, a coincidence timing resolution (CTR) well below 200 ps FWHM is now easily achievable in single-pixel, bench-top setups using 20-mm length, lutetium-based inorganic scintillators. However, multiplexing the output of many SiPMs to a single channel will significantly degrade CTR without appropriate signal processing. We test the performance of a PET detector readout concept that multiplexes 16 SiPMs to two channels. One channel provides timing information with fast comparators, and the second channel encodes both position and energy information in a time-over-threshold-based pulse sequence. This multiplexing readout concept was constructed with discrete components to process signals from a 4×4 array of SensL MicroFC-30035 SiPMs coupled to 2.9×2.9×20 mm³ Lu1.8Gd0.2SiO5 (LGSO):Ce (0.025 mol. %) scintillators. This readout method yielded a calibrated, global energy resolution of 15.3% FWHM at 511 keV with a CTR of 198±2 ps FWHM between the 16-pixel multiplexed detector array and a 2.9×2.9×20 mm³ LGSO-SiPM reference detector. In summary, the results indicate this multiplexing scheme is a scalable readout technique that provides excellent coincidence timing performance. PMID:28382312
An Island Grouping Genetic Algorithm for Fuzzy Partitioning Problems
Salcedo-Sanz, S.; Del Ser, J.; Geem, Z. W.
2014-01-01
This paper presents a novel fuzzy clustering technique based on grouping genetic algorithms (GGAs), a class of evolutionary algorithms especially modified to tackle grouping problems. Our approach hinges on a GGA devised for fuzzy clustering by means of a novel encoding of individuals (containing element and cluster sections), a new fitness function (an improved modification of the Davies-Bouldin index), specially tailored crossover and mutation operators, and a scheme based on local search and a parallelization process, inspired by an island-based model of evolution. The overall performance of our approach has been assessed over a number of synthetic and real fuzzy clustering problems with different objective functions and distance measures, from which it is concluded that the proposed approach shows excellent performance in all cases. PMID:24977235
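For reference, the classical Davies-Bouldin index on which the fitness function builds can be computed as below; this is the textbook form (the paper's specific modification is not reproduced), where lower values indicate compact, well-separated clusters:

```python
# Textbook Davies-Bouldin index: average, over clusters, of the worst-case
# ratio of within-cluster scatter to between-centroid distance.
import math

def centroid(points):
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def davies_bouldin(clusters):
    cents = [centroid(c) for c in clusters]
    # mean distance of each cluster's points to its own centroid
    scatter = [sum(dist(p, ce) for p in c) / len(c)
               for c, ce in zip(clusters, cents)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / dist(cents[i], cents[j])
                     for j in range(k) if j != i)
    return total / k
```

A GA fitness function would typically minimize this value (or maximize its negative), rewarding partitions whose clusters are tight relative to their separation.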
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
Large protein as a potential target for use in rabies diagnostics.
Santos Katz, I S; Dias, M H; Lima, I F; Chaves, L B; Ribeiro, O G; Scheffer, K C; Iwai, L K
Rabies is a zoonotic viral disease that remains a serious threat to public health worldwide. The rabies lyssavirus (RABV) genome encodes five structural proteins, which are multifunctional and significant for pathogenicity. The large protein (L) contains well-conserved genomic regions, which may be a good alternative for generating informative datasets for the development of new methods of rabies diagnosis. This paper describes the development of a technique for the identification of the L protein in several RABV strains from different hosts, demonstrating that MS-based proteomics is a potential method for antigen identification and a good alternative for rabies diagnosis.
An expert system that performs a satellite station keeping maneuver
NASA Technical Reports Server (NTRS)
Linesbrowning, M. Kate; Stone, John L., Jr.
1987-01-01
The development and characteristics of a prototype expert system, Expert System for Satellite Orbit Control (ESSOC), capable of providing real-time spacecraft system analysis and command generation for a geostationary satellite are described. The ESSOC recommends appropriate commands that reflect both the changing spacecraft condition and previous procedural action. An internal knowledge base stores satellite status information and is updated with processed spacecraft telemetry. Procedural structure data are encoded in production rules. Structural methods of knowledge acquisition and the design and performance-enhancing techniques that enable ESSOC to operate in real time are also considered.
Novel approaches for near and far field super-resolved imaging
NASA Astrophysics Data System (ADS)
Zalevsky, Zeev; Gur, Aviram; Aharoni, Ran; Kutchoukov, Vladimir G.; Garini, Yuval; Beiderman, Yevgeny; Micó, Vicente; García, Javier
2011-09-01
In this paper we start by presenting one recent development in the field of near-field imaging, in which a lensless microscope is introduced. Its operating principle is based upon wavelength encoding of the spatial information through a non-periodic hole array, followed by decoding of the spatial information using a spectrometer. In the second part of the paper we demonstrate a remote super-sensing technique that allows monitoring, from a distance, the glucose level in the blood stream of a patient by tracking the trajectory of secondary speckle patterns reflected from the skin of the wrist.
Image storage in coumarin-based copolymer thin films by photoinduced dimerization.
Gindre, Denis; Iliopoulos, Konstantinos; Krupka, Oksana; Champigny, Emilie; Morille, Yohann; Sallé, Marc
2013-11-15
We report a technique to encode grayscale digital images in thin films composed of copolymers containing coumarins. A nonlinear microscopy setup was implemented and two nonlinear optical processes were used to store and read information. A third-order process (two-photon absorption) was used to photoinduce a controlled dimer-to-monomer ratio within a defined tiny volume in the material, which corresponds to each recorded bit of data. Moreover, a second-order process (second-harmonic generation) was used to read the stored information, which has been found to be highly dependent upon the monomer-to-dimer ratio.
Sabooh, M Fazli; Iqbal, Nadeem; Khan, Mukhtaj; Khan, Muslim; Maqbool, H F
2018-05-01
This study examines an accurate and efficient computational method for the identification of 5-methylcytosine sites in RNA modification. The occurrence of 5-methylcytosine (m5C) plays a vital role in a number of biological processes. For a better comprehension of the biological functions and mechanism, it is necessary to recognize m5C sites in RNA precisely. Laboratory techniques and procedures are available to identify m5C sites in RNA, but these procedures require a lot of time and resources. This study develops a new computational method for extracting the features of an RNA sequence. In this method, the RNA sequence is first encoded via a composite feature vector; then, for the selection of discriminative features, the minimum-redundancy-maximum-relevance (mRMR) algorithm is used. Classification is then performed with a support vector machine evaluated by the jackknife cross-validation test. The suggested method efficiently distinguishes m5C sites from non-m5C sites, achieving an accuracy of 93.33% with a sensitivity of 90.0% and a specificity of 96.66% on benchmark datasets. These results show that the proposed algorithm achieves significantly better identification performance than existing computational techniques. This study extends knowledge about the occurrence sites of RNA modification, which paves the way for a better comprehension of its biological roles and mechanism. Copyright © 2018 Elsevier Ltd. All rights reserved.
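One common way to build a composite feature vector from an RNA sequence is to concatenate nucleotide and dinucleotide frequencies; the sketch below is an illustration of that general idea only, and does not reproduce the paper's exact feature construction, mRMR selection, or SVM classifier:

```python
# Illustrative composite feature vector for an RNA sequence:
# 4 mononucleotide frequencies + 16 dinucleotide frequencies = 20 features.
from itertools import product

BASES = "ACGU"

def composite_features(seq):
    n = len(seq)
    # mononucleotide composition
    mono = [seq.count(b) / n for b in BASES]
    # dinucleotide composition over the n-1 overlapping pairs
    dinucs = ["".join(p) for p in product(BASES, repeat=2)]
    di = [sum(1 for i in range(n - 1) if seq[i:i + 2] == d) / (n - 1)
          for d in dinucs]
    return mono + di
```

Vectors like this (often extended with positional or physicochemical features) are what a feature-selection step such as mRMR would then prune before classification.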
NASA Astrophysics Data System (ADS)
Li, Jing; Cai, Cong-Bo; Chen, Lin; Chen, Ying; Qu, Xiao-Bo; Cai, Shu-Hui
2015-10-01
In many ultrafast imaging applications, the reduced field-of-view (rFOV) technique is often used to enhance the spatial resolution and field inhomogeneity immunity of the images. The stationary-phase characteristic of the spatiotemporally-encoded (SPEN) method offers an inherent applicability to rFOV imaging. In this study, a flexible rFOV imaging method is presented and the superiority of the SPEN approach in rFOV imaging is demonstrated. The proposed method is validated with phantom and in vivo rat experiments, including cardiac imaging and contrast-enhanced perfusion imaging. For comparison, the echo planar imaging (EPI) experiments with orthogonal RF excitation are also performed. The results show that the signal-to-noise ratios of the images acquired by the proposed method can be higher than those obtained with the rFOV EPI. Moreover, the proposed method shows better performance in the cardiac imaging and perfusion imaging of rat kidney, and it can scan one or more regions of interest (ROIs) with high spatial resolution in a single shot. It might be a favorable solution to ultrafast imaging applications in cases with severe susceptibility heterogeneities, such as cardiac imaging and perfusion imaging. Furthermore, it might be promising in applications with separate ROIs, such as mammary and limb imaging. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474236, 81171331, and U1232212).
Single Photon Counting Large Format Imaging Sensors with High Spatial and Temporal Resolution
NASA Astrophysics Data System (ADS)
Siegmund, O. H. W.; Ertley, C.; Vallerga, J. V.; Cremer, T.; Craven, C. A.; Lyashenko, A.; Minot, M. J.
High time resolution astronomical and remote sensing applications have been addressed with microchannel plate (MCP)-based imaging, photon time-tagging detector sealed-tube schemes. These are being realized with the advent of cross-strip readout techniques with high-performance encoding electronics and atomic layer deposited (ALD) microchannel plate technologies. Sealed-tube devices up to 20 cm square have now been successfully implemented with sub-nanosecond timing and imaging. The objective is to provide sensors with large areas (25 cm² to 400 cm²) with spatial resolutions of <20 μm FWHM and timing resolutions of <100 ps for dynamic imaging. New high-efficiency photocathodes for the visible regime are discussed, which also allow response below 150 nm for UV sensing. Borosilicate MCPs are providing high performance, and when processed with ALD techniques they provide order-of-magnitude lifetime improvements and enhanced photocathode stability. New developments include UV/visible photocathodes, ALD MCPs, and high-resolution cross-strip anodes for 100 mm detectors. Tests with 50 mm format cross-strip readouts suitable for Planacon devices show spatial resolutions better than 20 μm FWHM, with good image linearity while using low gain (~10^6). Current cross-strip encoding electronics can accommodate event rates of >5 MHz and an event timing accuracy of 100 ps. High-performance ASIC versions of these electronics are in development, with better event rate, power, and mass suitable for spaceflight instruments.
NASA Astrophysics Data System (ADS)
Morikawa, Junko
2015-05-01
A mobile apparatus for quantitative micro-scale thermography using a micro-bolometer was developed based on our original techniques, such as an achromatic lens design to capture micro-scale images in the long-wave infrared, video-signal superimposing for real-time emissivity correction, and pseudo-acceleration of the time frame. The instrument was designed to fit in a 17 cm x 28 cm x 26 cm carrying box. The video signal synthesizer enabled direct digital recording of the monitored temperature and positioning data. The encoded digital signal embedded in each image was decoded on readout; the protocol for encoding and decoding the measured data was originally defined. The mixed signals of the IR camera and the imposed data were applied to pixel-by-pixel emissivity corrections and to the pseudo-acceleration of periodic thermal phenomena. Because the emissivity of industrial materials and biological tissues is usually inhomogeneous, it has a different temperature dependence at each pixel. The time-scale resolution for periodic thermal events was improved with the pseudo-acceleration algorithm, which reduces noise by integrating multiple image data while keeping the time resolution. The anisotropic thermal properties of composite materials, such as thermally insulating cellular plastics and biometric composite materials, were analyzed using these techniques.
Multicore-based 3D-DWT video encoder
NASA Astrophysics Data System (ADS)
Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector
2013-12-01
Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
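The wavelet stage can be illustrated with a one-level Haar split (a simplified stand-in, not the encoder's actual filter bank); a separable 3D-DWT applies such a split along the two spatial axes and the temporal axis in turn:

```python
# One-level Haar split: an even-length signal becomes an approximation half
# (pairwise averages) and a detail half (pairwise half-differences).

def haar_1d(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exactly invert haar_1d."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

In smooth regions the detail coefficients are near zero, producing the long runs of small values that a run-length coding engine, like the one in the proposed encoder, compresses efficiently.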
2014-10-06
The nanosheets, like many SERS platforms, are ideally suited for encoding schemes based on the SERS signal from a variety of thiolated small... counterfeiting purposes. ...environments (like the surface of human hair). 2. Nanoflares: In 2007, we first introduced the concept of nanoflares. Nanoflares are a new class of
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high-cost and long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The chemoinformatics field applies informational techniques and computer science methods, such as machine learning and graph theory, to discover chemical compound properties such as toxicity or biological activity through analysis of the molecular structure (molecular graph). There is thus an increasing need for algorithms that analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework that combines machine learning with graph theory techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, we encode each atom based on its neighbors; we then use these codes to establish relationships among the atoms, and use the relations between atoms to compute the similarity between chemical compounds. The proposed approach was compared with many other classification methods, and the results show competitive accuracy with these methods. Copyright © 2015 Elsevier Inc. All rights reserved.
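The atom-encoding idea described above can be illustrated with a toy kernel (our simplification, not the authors' method): each atom is coded by its element and the multiset of its neighbors' elements, and two molecules are compared by the overlap of their code multisets:

```python
# Toy atom-neighborhood similarity kernel for molecular graphs.
from collections import Counter

def atom_codes(atoms, bonds):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs.
    Each atom's code is (its element, sorted tuple of neighbor elements)."""
    neigh = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neigh[i].append(atoms[j])
        neigh[j].append(atoms[i])
    return Counter((atoms[i], tuple(sorted(neigh[i]))) for i in range(len(atoms)))

def similarity_kernel(mol_a, mol_b):
    """Normalized multiset intersection of atom codes; 1.0 for identical graphs."""
    ca, cb = atom_codes(*mol_a), atom_codes(*mol_b)
    inter = sum((ca & cb).values())          # Counter & = multiset intersection
    return inter / max(sum(ca.values()), sum(cb.values()))
```

Real graph kernels iterate this neighborhood refinement (as in Weisfeiler-Lehman or circular fingerprints); one round is enough to show how atom codes turn graph comparison into multiset overlap.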
A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision
NASA Astrophysics Data System (ADS)
Tsai, Yuan-Yu
2016-03-01
Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
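The polynomial sharing step can be sketched with the classic single-secret variant of Shamir's scheme (the paper instead packs the encoded point coordinates into all coefficients of the polynomial, and additionally builds cover models with data hiding; neither is shown here):

```python
# Single-secret Shamir sharing over GF(P): any k of n shares reconstruct
# the secret; fewer than k reveal nothing. P is a small prime for illustration.
import random

P = 257

def make_shares(secret, k, n):
    """Random degree-(k-1) polynomial with the secret as constant term,
    evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In the scheme described above, each quantized point coordinate in [0, p - 1] would play the role of such a shared value, with one evaluation of the sharing polynomial handed to each participant inside a stego model.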
The synaptic plasticity and memory hypothesis: encoding, storage and persistence
Takeuchi, Tomonori; Duszkiewicz, Adrian J.; Morris, Richard G. M.
2014-01-01
The synaptic plasticity and memory hypothesis asserts that activity-dependent synaptic plasticity is induced at appropriate synapses during memory formation and is both necessary and sufficient for the encoding and trace storage of the type of memory mediated by the brain area in which it is observed. Criteria for establishing the necessity and sufficiency of such plasticity in mediating trace storage have been identified and are here reviewed in relation to new work using some of the diverse techniques of contemporary neuroscience. Evidence derived using optical imaging, molecular-genetic and optogenetic techniques in conjunction with appropriate behavioural analyses continues to offer support for the idea that changing the strength of connections between neurons is one of the major mechanisms by which engrams are stored in the brain. PMID:24298167
Behavior Knowledge Space-Based Fusion for Copy-Move Forgery Detection.
Ferreira, Anselmo; Felipussi, Siovani C; Alfaro, Carlos; Fonseca, Pablo; Vargas-Munoz, John E; Dos Santos, Jefersson A; Rocha, Anderson
2016-07-20
The detection of copy-move image tampering is of paramount importance nowadays, mainly due to its potential use for misleading the opinion-forming process of the general public. In this paper, we go beyond traditional forgery detectors and aim at combining different properties of copy-move detection approaches by modeling the problem on a multiscale behavior knowledge space, which encodes the output combinations of different techniques as a priori probabilities considering multiple scales of the training data. Afterwards, the missing entries of the conditional probabilities are estimated through generative models applied to the existing training data. Finally, we propose different techniques that exploit the multi-directionality of the data to generate the final detection map in a machine-learning decision-making fashion. Experimental results on complex datasets, comparing the proposed techniques with a gamut of copy-move detection approaches and other fusion methodologies in the literature, show the effectiveness of the proposed method and its suitability for real-world applications.
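A behavior knowledge space in its simplest form is a lookup table from the tuple of individual detector outputs to the class counts observed in training; this toy sketch shows only that core idea, not the multiscale modeling or the generative estimation of missing entries described above:

```python
# Toy behavior-knowledge-space fusion: the joint output tuple of several
# detectors indexes a table cell holding training class counts; fusion
# returns the majority class of the matching cell.
from collections import defaultdict

class BKSFusion:
    def __init__(self):
        self.table = defaultdict(lambda: defaultdict(int))

    def fit(self, detector_outputs, labels):
        for outputs, y in zip(detector_outputs, labels):
            self.table[tuple(outputs)][y] += 1

    def predict(self, outputs):
        cell = self.table.get(tuple(outputs))
        if not cell:
            # unseen output combination: this is the "missing entry" case the
            # paper fills in with generative models
            return None
        return max(cell, key=cell.get)
```

The weakness visible even in this sketch, i.e. cells never seen in training, is precisely what motivates the paper's generative estimation of the missing probabilities.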
Nonredundant sparse feature extraction using autoencoders with receptive fields clustering.
Ayinde, Babajide O; Zurada, Jacek M
2017-09-01
This paper proposes new techniques for data representation in the context of deep learning using agglomerative clustering. Existing autoencoder-based data representation techniques tend to produce a number of encoding and decoding receptive fields of layered autoencoders that are duplicative, leading to the extraction of similar features and hence to filtering redundancy. We propose a way to address this problem and show that such redundancy can be eliminated. This yields smaller networks and produces unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on weights extract fewer redundant features than conventional sparse autoencoders. The concept is illustrated using a conventional sparse autoencoder and nonnegativity-constrained autoencoders on MNIST digit recognition, the NORB normalized-uniform object dataset, and the Yale face dataset. Copyright © 2017 Elsevier Ltd. All rights reserved.
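The redundancy-elimination idea can be illustrated with a greedy stand-in for the clustering step: receptive fields (rows of the encoder weight matrix) whose cosine similarity to an already-kept field exceeds a threshold are treated as duplicates and dropped. This is a sketch under assumed simplifications, not the paper's agglomerative algorithm; the function name and threshold are illustrative.

```python
import numpy as np

def prune_duplicate_fields(W, threshold=0.95):
    """Drop encoder receptive fields (rows of W) that are near-duplicates of
    an already-kept field, measured by absolute cosine similarity."""
    normed = W / np.linalg.norm(W, axis=1, keepdims=True)
    kept = []
    for i, w in enumerate(normed):
        if all(abs(w @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return W[kept]
```

A proper agglomerative clustering would instead merge similar fields into cluster representatives rather than simply discarding them.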
An opinion formation based binary optimization approach for feature selection
NASA Astrophysics Data System (ADS)
Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo
2018-02-01
This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics human-human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion-formation-based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.
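A toy version of the dynamics can be sketched as follows: each agent holds a binary opinion vector (a candidate feature subset), agents on a ring network drift toward the opinion of their fitter neighbour, and random flips guard against local minima. All dynamics here (ring topology, copy probability, flip rate) are illustrative assumptions, not the paper's model.

```python
import random

def opinion_search(fitness, n_feats, n_agents=10, iters=50, flip_p=0.05, seed=0):
    """Minimal opinion-formation sketch for feature selection.
    `fitness` maps a binary tuple (selected features) to a score to maximise."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(n_agents)]
    for _ in range(iters):
        for i in range(n_agents):
            nbrs = [pop[(i - 1) % n_agents], pop[(i + 1) % n_agents]]
            best = max(nbrs, key=lambda a: fitness(tuple(a)))
            for k in range(n_feats):
                if rng.random() < 0.5:      # drift toward the better neighbour
                    pop[i][k] = best[k]
                if rng.random() < flip_p:   # noise to escape local minima
                    pop[i][k] ^= 1
    return max(pop, key=lambda a: fitness(tuple(a)))
```

In practice the fitness would be a classifier's cross-validated accuracy penalised by subset size.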
USDA-ARS's Scientific Manuscript database
Recent developments in spectrally encoded microspheres (SEMs)-based technologies provide high multiplexing possibilities. Most SEMs-based assays require a flow cytometer with sophisticated fluidics and optics. The new imaging superparamagnetic SEMs-based platform transports SEMs with considerably ...
Oltra-Cucarella, J; Pérez-Elvira, R; Duque, P
2014-06-01
The aim of this study is to test the encoding deficit hypothesis in Alzheimer disease (AD) using a recent method for correcting memory tests. To this end, a Spanish-language adaptation of the Free and Cued Selective Reminding Test was interpreted using the Item Specific Deficit Approach (ISDA), which provides three indices: Encoding Deficit Index, Consolidation Deficit Index, and Retrieval Deficit Index. We compared the performances of 15 patients with AD and 20 healthy control subjects and analysed the results using either the task instructions or the ISDA approach. Patients with AD displayed deficient encoding of more than half the information, but items that were encoded properly could be retrieved later with the help of the same semantic clues provided individually during encoding. Virtually all the information retained over the long term was retrieved by using semantic clues. Encoding was shown to be the most impaired process, followed by retrieval and consolidation. Discriminant function analyses showed that ISDA indices are more sensitive and specific for detecting memory impairments in AD than are raw scores. These results indicate that patients with AD present impaired information encoding but benefit from semantic hints that help them recover previously learned information. This should be taken into account for intervention techniques focusing on memory impairments in AD. Copyright © 2013 Sociedad Española de Neurología. Published by Elsevier España. All rights reserved.
Spectrally-encoded color imaging
Kang, DongKyun; Yelin, Dvir; Bouma, Brett E.; Tearney, Guillermo J.
2010-01-01
Spectrally-encoded endoscopy (SEE) is a technique for ultraminiature endoscopy that encodes each spatial location on the sample with a different wavelength. One limitation of previous incarnations of SEE is that it inherently creates monochromatic images, since the spectral bandwidth is expended in the spatial encoding process. Here we present a spectrally-encoded imaging system that has color imaging capability. The new imaging system utilizes three distinct red, green, and blue spectral bands that are configured to illuminate the grating at different incident angles. By careful selection of the incident angles, the three spectral bands can be made to overlap on the sample. To demonstrate the method, a bench-top system was built, comprising a 2400-lpmm grating illuminated by three 525-μm-diameter beams with three different spectral bands. Each spectral band had a bandwidth of 75 nm, producing 189 resolvable points. A resolution target, color phantoms, and excised swine small intestine were imaged to validate the system's performance. The color SEE system showed qualitatively and quantitatively similar color imaging performance to that of a conventional digital camera. PMID:19688002
Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states
NASA Astrophysics Data System (ADS)
Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.
2018-04-01
We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
Multi-protocol header generation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, David A.; Ignatowski, Michael; Jayasena, Nuwan
A communication device includes a data source that generates data for transmission over a bus, and a data encoder that receives and encodes outgoing data. An encoder system receives outgoing data from a data source and stores the outgoing data in a first queue. An encoder encodes outgoing data with a header type that is based upon a header type indication from a controller and stores the encoded data, which may be a packet or a data word with at least one layered header, in a second queue for transmission. The device is configured to receive, at a payload extractor, a packet protocol change command from the controller, to remove the encoded data, and to re-encode the data to create a re-encoded data packet, placing the re-encoded data packet in the second queue for transmission.
Implementation of trinary logic in a polarization encoded optical shadow-casting scheme.
Rizvi, R A; Zaheer, K; Zubairy, M S
1991-03-10
The design of various multioutput trinary combinational logic units by a polarization encoded optical shadow-casting (POSC) technique is presented. The POSC modified algorithm is employed to design and implement these logic elements in a trinary number system with separate and simultaneous generation of outputs. A detailed solution of the POSC logic equations for a fixed source plane and a fixed decoding mask is given to obtain input pixel coding for a trinary half-adder, full adder, and subtractor.
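The logical behaviour that the optical shadow-casting scheme realises for one of these units, the trinary half adder, reduces to base-3 digit arithmetic. The sketch below shows only the truth-table mapping; the optical pixel coding that implements it is the paper's contribution.

```python
def trinary_half_adder(a, b):
    """Base-3 half adder: returns (sum digit, carry digit) for trits a, b."""
    assert a in (0, 1, 2) and b in (0, 1, 2)
    total = a + b
    return total % 3, total // 3
```

For example, adding the trits 2 and 2 gives total 4, i.e. sum digit 1 with carry 1.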
Jørgensen, P L; Tangney, M; Pedersen, P E; Hastrup, S; Diderichsen, B; Jørgensen, S T
2000-02-01
A gene encoding an alkaline protease was cloned from an alkalophilic bacillus, and its nucleotide sequence was determined. The cloned gene was used to increase the copy number of the protease gene on the chromosome by an improved gene amplification technique.
NASA Astrophysics Data System (ADS)
Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep; Raghuwanshi, Sanjeev K.
2016-04-01
An encoder is a combinational device that maps information from a number of input lines onto a smaller number of output lines. Any combinational logic circuit can be implemented using an encoder and external gates. In this paper, a 4-to-2 line encoder is proposed using the electro-optic effect inside lithium-niobate-based Mach-Zehnder interferometers (MZIs). The MZI structure has a powerful capability to switch an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device, followed by simulation using MATLAB. The study is verified using the beam propagation method (BPM).
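The logical function the MZI network implements is the standard 4-to-2 encoder truth table: a one-hot input on four lines is mapped to the 2-bit binary index of the active line. The sketch below captures only that mapping, not the optical switching.

```python
def encoder_4to2(inputs):
    """4-to-2 line encoder: one-hot list of 4 bits -> 2-bit code (A1, A0)."""
    assert len(inputs) == 4 and sum(inputs) == 1, "exactly one input line active"
    i = inputs.index(1)
    return (i >> 1) & 1, i & 1
```

For instance, activating input line 2 yields the code (1, 0).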
Teasing apart retrieval and encoding interference in the processing of anaphors
Jäger, Lena A.; Benz, Lena; Roeser, Jens; Dillon, Brian W.; Vasishth, Shravan
2015-01-01
Two classes of account have been proposed to explain the memory processes subserving the processing of reflexive-antecedent dependencies. Structure-based accounts assume that the retrieval of the antecedent is guided by syntactic tree-configurational information without considering other kinds of information such as gender marking in the case of English reflexives. By contrast, unconstrained cue-based retrieval assumes that all available information is used for retrieving the antecedent. Similarity-based interference effects from structurally illicit distractors which match a non-structural retrieval cue have been interpreted as evidence favoring the unconstrained cue-based retrieval account since cue-based retrieval interference from structurally illicit distractors is incompatible with the structure-based account. However, it has been argued that the observed effects do not necessarily reflect interference occurring at the moment of retrieval but might equally well be accounted for by interference occurring already at the stage of encoding or maintaining the antecedent in memory, in which case they cannot be taken as evidence against the structure-based account. We present three experiments (self-paced reading and eye-tracking) on German reflexives and Swedish reflexive and pronominal possessives in which we pit the predictions of encoding interference and cue-based retrieval interference against each other. We could not find any indication that encoding interference affects the processing ease of the reflexive-antecedent dependency formation. Thus, there is no evidence that encoding interference might be the explanation for the interference effects observed in previous work. We therefore conclude that invoking encoding interference may not be a plausible way to reconcile interference effects with a structure-based account of reflexive processing. PMID:26106337
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Novel selection methods for DNA-encoded chemical libraries.
Chan, Alix I; McGregor, Lynn M; Liu, David R
2015-06-01
Driven by the need for new compounds to serve as biological probes and leads for therapeutic development and the growing accessibility of DNA technologies including high-throughput sequencing, many academic and industrial groups have begun to use DNA-encoded chemical libraries as a source of bioactive small molecules. In this review, we describe the technologies that have enabled the selection of compounds with desired activities from these libraries. These methods exploit the sensitivity of in vitro selection coupled with DNA amplification to overcome some of the limitations and costs associated with conventional screening methods. In addition, we highlight newer techniques with the potential to be applied to the high-throughput evaluation of DNA-encoded chemical libraries. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ilk, Nicola; Völlenkle, Christine; Egelseer, Eva M.; Breitwieser, Andreas; Sleytr, Uwe B.; Sára, Margit
2002-01-01
The nucleotide sequence encoding the crystalline bacterial cell surface (S-layer) protein SbpA of Bacillus sphaericus CCM 2177 was determined by a PCR-based technique using four overlapping fragments. The entire sbpA sequence indicated one open reading frame of 3,804 bp encoding a protein of 1,268 amino acids with a theoretical molecular mass of 132,062 Da and a calculated isoelectric point of 4.69. The N-terminal part of SbpA, which is involved in anchoring the S-layer subunits via a distinct type of secondary cell wall polymer to the rigid cell wall layer, comprises three S-layer-homologous motifs. For screening of amino acid positions located on the outer surface of the square S-layer lattice, the sequence encoding Strep-tag I, showing affinity to streptavidin, was linked to the 5′ end of the sequence encoding the recombinant S-layer protein (rSbpA) or a C-terminally truncated form (rSbpA31-1068). The deletion of 200 C-terminal amino acids did not interfere with the self-assembly properties of the S-layer protein but significantly increased the accessibility of Strep-tag I. Thus, the sequence encoding the major birch pollen allergen (Bet v1) was fused via a short linker to the sequence encoding the C-terminally truncated form rSbpA31-1068. Labeling of the square S-layer lattice formed by recrystallization of rSbpA31-1068/Bet v1 on peptidoglycan-containing sacculi with a Bet v1-specific monoclonal mouse antibody demonstrated the functionality of the fused protein sequence and its location on the outer surface of the S-layer lattice. The specific interactions between the N-terminal part of SbpA and the secondary cell wall polymer will be exploited for an oriented binding of the S-layer fusion protein on solid supports to generate regularly structured functional protein lattices. PMID:12089001
Huff, Mark J; Bodner, Glen E; Fawcett, Jonathan M
2015-04-01
We review and meta-analyze how distinctive encoding alters encoding and retrieval processes and, thus, affects correct and false recognition in the Deese-Roediger-McDermott (DRM) paradigm. Reductions in false recognition following distinctive encoding (e.g., generation), relative to a nondistinctive read-only control condition, reflected both impoverished relational encoding and use of a retrieval-based distinctiveness heuristic. Additional analyses evaluated the costs and benefits of distinctive encoding in within-subjects designs relative to between-group designs. Correct recognition was design independent, but in a within design, distinctive encoding was less effective at reducing false recognition for distinctively encoded lists but more effective for nondistinctively encoded lists. Thus, distinctive encoding is not entirely "cost free" in a within design. In addition to delineating the conditions that modulate the effects of distinctive encoding on recognition accuracy, we discuss the utility of using signal detection indices of memory information and memory monitoring at test to separate encoding and retrieval processes.
Release From Proactive Interference with Young Children
ERIC Educational Resources Information Center
Cann, Linda F.; And Others
1973-01-01
This demonstration of release from proactive interference with young children confirms the suggestion that the technique is appropriate for the study of developmental changes in the encoding of information. (Authors/CB)
Quantitative DLA-based compressed sensing for T1-weighted acquisitions
NASA Astrophysics Data System (ADS)
Svehla, Pavel; Nguyen, Khieu-Van; Li, Jing-Rebecca; Ciobanu, Luisa
2017-08-01
High resolution Manganese Enhanced Magnetic Resonance Imaging (MEMRI), which uses manganese as a T1 contrast agent, has great potential for functional imaging of live neuronal tissue at single-neuron scale. However, reaching high resolutions often requires long acquisition times, which can lead to reduced image quality due to sample deterioration and hardware instability. Compressed Sensing (CS) techniques offer the opportunity to significantly reduce the imaging time. The purpose of this work is to test the feasibility of CS acquisitions based on Diffusion Limited Aggregation (DLA) sampling patterns for high resolution quantitative T1-weighted imaging. Fully encoded and DLA-CS T1-weighted images of Aplysia californica neural tissue were acquired on a 17.2T MRI system. The MR signal corresponding to single, identified neurons was quantified for both versions of the T1-weighted images. For 50% undersampling, DLA-CS can accurately quantify signal intensities in T1-weighted acquisitions, leading to differences of only 1.37% compared to the fully encoded data, with minimal impact on image spatial resolution. In addition, we compared the conventional polynomial undersampling scheme with the DLA and showed that, for the data at hand, the latter performs better. Depending on the image signal-to-noise ratio, higher undersampling ratios can be used to further reduce the acquisition time in MEMRI-based functional studies of living tissues.
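The sampling-pattern idea can be illustrated with a toy diffusion-limited aggregation on a small k-space grid: random walkers stick to the growing cluster when they become adjacent to it, producing a connected, centre-weighted sampling mask. This is a sketch only; the paper's actual pattern generation (grid size, seeding, walk rules) may differ.

```python
import random

def dla_mask(n, n_samples, seed=0):
    """Toy DLA sampling pattern on an n x n grid (torus boundary).
    Seeds at the centre, then grows by sticking random walkers to the cluster."""
    rng = random.Random(seed)
    cluster = {(n // 2, n // 2)}
    while len(cluster) < n_samples:
        x, y = rng.randrange(n), rng.randrange(n)   # launch a walker
        for _ in range(10 * n * n):                 # cap the walk length
            if any((x + dx, y + dy) in cluster for dx, dy in
                   ((1, 0), (-1, 0), (0, 1), (0, -1))):
                cluster.add((x, y))                 # stick next to the cluster
                break
            x = (x + rng.choice((-1, 0, 1))) % n
            y = (y + rng.choice((-1, 0, 1))) % n
    return cluster
```

The resulting set of (x, y) points would serve as the k-space locations to acquire; unsampled locations are reconstructed by the CS algorithm.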
iSS-PC: Identifying Splicing Sites via Physical-Chemical Properties Using Deep Sparse Auto-Encoder.
Xu, Zhao-Chun; Wang, Peng; Qiu, Wang-Ren; Xiao, Xuan
2017-08-15
Gene splicing is one of the most significant biological processes in eukaryotic gene expression; for example, RNA splicing can cause a pre-mRNA to produce one or more mature messenger RNAs containing the coded information with multiple biological functions. Thus, identifying splicing sites in DNA/RNA sequences is significant for both biomedical research and the discovery of new drugs. However, relying on experimental techniques alone is expensive and time consuming, so new computational methods are needed. To identify splice donor sites and splice acceptor sites accurately and quickly, a deep sparse auto-encoder model with two hidden layers, called iSS-PC, was constructed based on the minimum error law, in which we incorporated twelve physical-chemical properties of the dinucleotides within DNA into PseDNC to formulate given sequence samples via a battery of cross-covariance and auto-covariance transformations. Five-fold cross-validation test results based on the same benchmark datasets indicated that the new predictor remarkably outperformed existing prediction methods in this field. Furthermore, it is expected that many other related problems can also be studied by this approach. To implement classification accurately and quickly, an easy-to-use web-server for identifying splicing sites has been established for free access at: http://www.jci-bioinfo.cn/iSS-PC.
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical-depth-dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post-processing technique to salvage the spatial interpretability of the GE-EPI cortical-depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
Palmer, J E; Dikeman, D A; Fujinuma, T; Kim, B; Jones, J I; Denda, M; Martínez-Zapater, J M; Cruz-Alvarez, M
2001-04-01
The species Brassica oleracea includes several agricultural varieties characterized by the proliferation of different types of meristems. Using a combination of subtractive hybridization and PCR (polymerase chain reaction) techniques we have identified several genes which are expressed in the reproductive meristems of the cauliflower curd (B. oleracea var. botrytis) but not in the vegetative meristems of Brussels sprouts (B. oleracea var. gemmifera) axillary buds. One of the cloned genes, termed CCE1 (CAULIFLOWER CURD EXPRESSION 1) shows specific expression in the botrytis variety. Preferential expression takes place in this variety in the meristems of the curd and in the stem throughout the vegetative and reproductive stages of plant growth. CCE1 transcripts are not detected in any of the organs of other B. oleracea varieties analyzed. Based on the nucleotide sequence of a cDNA encompassing the complete coding region, we predict that this gene encodes a transmembrane protein, with three transmembrane domains. The deduced amino acid sequence includes motifs conserved in G-protein-coupled receptors (GPCRs) from yeast and animal species. Our results suggest that the cloned gene encodes a protein belonging to a new, so far unidentified, family of transmembrane receptors in plants. The expression pattern of the gene suggests that the receptor may be involved in the control of meristem development/arrest that takes place in cauliflower.
A Bayesian model for highly accelerated phase-contrast MRI.
Rich, Adam; Potter, Lee C; Jin, Ning; Ash, Joshua; Simonetti, Orlando P; Ahmad, Rizwan
2016-08-01
Phase-contrast magnetic resonance imaging is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to four-dimensional flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to phase-contrast magnetic resonance imaging. The proposed approach models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. The proposed approach is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R≤10. For SV, Pearson r≥0.99 for phantom imaging (n = 24) and r≥0.96 for prospectively accelerated in vivo imaging (n = 10) for R≤10. The proposed approach enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to four-dimensional flow imaging, where higher acceleration may be possible due to additional redundancy. Magn Reson Med 76:689-701, 2016. © 2015 Wiley Periodicals, Inc.
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
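The Richardson-Lucy step mentioned above is a standard iterative ML deconvolution; a 1-D sketch applied to a waveform is shown below. The update rule is the classic one; the waveform setup (spike signal, 3-tap response) is purely illustrative, not the paper's detector data.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=25):
    """1-D Richardson-Lucy deconvolution: iteratively estimates the signal
    whose convolution with `psf` best explains `observed` (Poisson ML)."""
    est = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        est *= np.convolve(ratio, psf_flipped, mode="same")
    return est
```

Applied to a digitized pulse, this sharpens the rising edge by removing the photodetector's single-photoelectron response, as the abstract describes.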
Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.
2012-01-01
Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190
Multiple-stage pure phase encoding with biometric information
NASA Astrophysics Data System (ADS)
Chen, Wen
2018-01-01
In recent years, many optical systems have been developed for securing information, and optical encryption/encoding has attracted increasing attention due to marked advantages such as parallel processing and multi-dimensional characteristics. In this paper, an optical security method is presented based on pure phase encoding with biometric information. Biometric information (such as a fingerprint) is employed as the security key rather than as the plaintext used in conventional optical security systems, and multiple-stage phase-encoding-based optical systems are designed to generate several phase-only masks from the biometric information. The extracted phase-only masks are then used in an optical setup to encode an input image (i.e., the plaintext). Numerical simulations are conducted to illustrate the validity of the method, and the results demonstrate that high flexibility and high security can be achieved.
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
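The block-circulant structure the abstract exploits means each parity-check or generator block is fully specified by its first row, and multiplying a message by a circulant reduces to a cyclic convolution over GF(2), which is exactly what a shift-register encoder computes. A minimal Python sketch of that core operation (illustrative only, not the paper's FPGA implementation; names are my own):

```python
def mul_circulant(u, first_row):
    """Multiply row vector u by the circulant matrix with the given first row,
    over GF(2). Row i of the circulant is first_row cyclically shifted right
    by i, so the product is a cyclic convolution:
        out[k] = XOR over i of u[i] * first_row[(k - i) mod n].
    Only the first row is stored, so memory is O(n) instead of O(n^2)."""
    n = len(u)
    out = [0] * n
    for i, bit in enumerate(u):
        if bit:  # XOR in first_row shifted by i, as a shift register would
            for j, b in enumerate(first_row):
                out[(i + j) % n] ^= b
    return out
```

A full block-circulant encoder concatenates several such products, one per circulant block of the generator matrix.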
Investigations of interpolation errors of angle encoders for high precision angle metrology
NASA Astrophysics Data System (ADS)
Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa
2018-06-01
Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. We present results from laboratories with advanced angle metrology capabilities, acquired using four different high-precision combinations of angle encoders, interpolators and rotary tables. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the participants' calibration and measurement capabilities (CMC) for autocollimators, the shearing technique improves the uncertainty by a factor of up to 5, in addition to precisely determining the interpolation errors or their residuals (when compensated). The results are discussed in conjunction with the equipment used.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
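The spectral angle used above as the quality-assessment function is the arccosine of the normalized dot product between an original and a reconstructed pixel spectrum, which makes it insensitive to overall illumination scaling. A small sketch (function name and the numerical clipping are my own):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two pixel spectra.
    Zero means identical spectral shape; scaling a spectrum by a positive
    constant leaves the angle unchanged."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

A quality-controlled encoder would compare this angle per pixel against the user-specified threshold and adjust the quantization on the next pass.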
A novel approach of an absolute coding pattern based on Hamiltonian graph
NASA Astrophysics Data System (ADS)
Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang
2017-02-01
In this paper, a novel approach to an optical absolute rotary encoder coding pattern is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern using variations of the Hamiltonian graph principle. Twelve encoding bits are used in the single ring, read by a linear CCD array, to achieve a 1080-position cyclic encoding. In addition, a 2-by-2 matrix is used as a unit in the 2-track disk to achieve a 16-bit encoding pattern read by an area-array CCD sensor (as an example). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional Gray or binary code pattern (with 2^n resolution for n tracks), the new pattern achieves a higher resolution (2^n x n) with fewer coding tracks, which leads to a smaller encoder, an essential property in industrial production.
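The paper's exact Hamiltonian-graph construction is not reproduced here, but a closely related classical construction is the de Bruijn sequence (a Hamiltonian cycle on the de Bruijn graph), in which every cyclic window of n symbols is unique: exactly the property a single-track absolute encoder needs, since reading any n consecutive bits then fixes the shaft position. A sketch using the standard FKM (Lyndon word) algorithm:

```python
def de_bruijn(k, n):
    """k-ary de Bruijn sequence B(k, n) of length k**n, via the standard
    FKM algorithm (concatenation of Lyndon words whose length divides n)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

def windows_unique(seq, n):
    """Check that every cyclic length-n window of the code track is distinct,
    i.e. any n consecutive symbols identify the absolute position."""
    s = seq + seq[:n - 1]               # unwrap the ring
    wins = [tuple(s[i:i + n]) for i in range(len(seq))]
    return len(set(wins)) == len(seq)
```

For example, `de_bruijn(2, 12)` gives a 4096-position single-track code readable through a 12-bit window, comparable in spirit to the 12-bit single-ring pattern described above.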
Multi-Level Sequential Pattern Mining Based on Prime Encoding
NASA Astrophysics Data System (ADS)
Lianglei, Sun; Yun, Li; Jiang, Yin
Encoding serves not only to express the hierarchical relationship but also to facilitate identifying the relationships between different levels, which directly affects the efficiency of algorithms for mining multi-level sequential patterns. In this paper, we prove that a single division operation can decide the parent-child relationship between different levels under prime encoding, and we present the PMSM and CROSS-PMSM algorithms, both based on prime encoding, for mining multi-level and cross-level sequential patterns, respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from a sequence database.
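The core idea behind prime encoding of a hierarchy can be illustrated in a few lines: give each node the product of its parent's code and a fresh prime, and then a single modulo operation decides the ancestor-descendant relation. This is a hypothetical sketch of the principle, not the paper's exact encoding:

```python
def primes(count):
    """First `count` primes by trial division (enough for a small taxonomy)."""
    out, n = [], 2
    while len(out) < count:
        if all(n % p for p in out):
            out.append(n)
        n += 1
    return out

def assign_codes(tree, root):
    """Assign each node the product of its parent's code and a fresh prime.
    tree maps a node to its list of children."""
    ps = iter(primes(64))
    codes = {root: next(ps)}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            codes[child] = codes[node] * next(ps)
            stack.append(child)
    return codes

def is_ancestor(codes, a, b):
    """a is an ancestor of b exactly when code(a) divides code(b):
    one division decides the hierarchy relation."""
    return codes[b] % codes[a] == 0
```

Because every node contributes a distinct prime, divisibility holds if and only if the candidate ancestor's path is a prefix of the descendant's path.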
Mori, Yasuo; Miyata, Jun; Isobe, Masanori; Son, Shuraku; Yoshihara, Yujiro; Aso, Toshihiko; Kouchiyama, Takanori; Murai, Toshiya; Takahashi, Hidehiko
2018-05-17
Echo-planar imaging is a common technique used in functional magnetic resonance imaging (fMRI); however, it suffers from image distortion and signal loss because of large susceptibility effects that are related to the phase-encoding direction of the scan. Despite this relationship, the majority of neuroimaging studies have not considered the influence of phase-encoding direction. Here, we aimed to clarify how phase-encoding direction can affect the outcome of an fMRI connectivity study of schizophrenia. Resting-state fMRI using anterior-to-posterior (A-P) and posterior-to-anterior (P-A) directions was used to examine 25 patients with schizophrenia (SC) and 37 matched healthy controls (HC). We conducted a functional connectivity analysis using independent component analysis and performed three group comparisons: A-P vs. P-A (all participants), SC vs. HC for the A-P and P-A datasets, and the interaction between phase-encoding direction and participant group. The estimated functional connectivity differed between the two phase-encoding directions in areas that were more extensive than those where signal loss has been reported. Although functional connectivity in the SC group was lower than that in the HC group for both directions, the A-P and P-A conditions did not exhibit the same specific pattern of differences. Further, we observed an interaction between participant group and phase-encoding direction in the left temporo-parietal junction and left fusiform gyrus. Phase-encoding direction can influence the results of functional connectivity studies. Thus, appropriate selection and documentation of phase-encoding direction will be important in future resting-state fMRI studies. This article is protected by copyright. All rights reserved.
A Rewritable, Random-Access DNA-Based Storage System.
Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-18
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
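The constrained codes in the paper are considerably more sophisticated, but one constraint such codes commonly address, avoiding homopolymer runs that degrade DNA synthesis and sequencing, can be illustrated with a Goldman-style rotating code: each ternary digit selects one of the three bases that differ from the previous base, so no base ever repeats. This is an illustrative sketch, not the paper's scheme:

```python
BASES = "ACGT"

def encode_trits(trits):
    """Map a ternary digit stream to DNA with no adjacent repeated base:
    each trit indexes into the three bases that differ from the previous one."""
    seq, prev = [], "A"          # fixed virtual starting base (a convention here)
    for t in trits:
        choices = [b for b in BASES if b != prev]
        prev = choices[t]
        seq.append(prev)
    return "".join(seq)

def decode_trits(seq):
    """Invert encode_trits by recomputing the same choice list at each step."""
    trits, prev = [], "A"
    for b in seq:
        choices = [c for c in BASES if c != prev]
        trits.append(choices.index(b))
        prev = b
    return trits
```

Since the decoder only needs the previous base to reconstruct each digit, any block can be decoded independently once its starting context is known, which is the kind of locality a random-access architecture relies on.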
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
A Rewritable, Random-Access DNA-Based Storage System
NASA Astrophysics Data System (ADS)
Tabatabaei Yazdi, S. M. Hossein; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-01
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
Integrated source and channel encoded digital communication system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.; Trumpis, B. D.; Udalov, S.
1975-01-01
Various aspects of space shuttle communication systems were studied. The following major areas were investigated: burst error correction for shuttle command channels; performance optimization and design considerations for Costas receivers with and without bandpass limiting; experimental techniques for measuring low level spectral components of microwave signals; and potential modulation and coding techniques for the Ku-band return link. Results are presented.
Al-Masaudi, Saad; El Kaoutari, Abdessamad; Drula, Elodie; Al-Mehdar, Hussein; Redwan, Elrashdy M; Lombard, Vincent; Henrissat, Bernard
2017-01-01
The digestive microbiota of humans and of a wide range of animals has recently become amenable to in-depth studies due to the emergence of DNA-based metagenomic techniques that do not require cultivation of gut microbes. These techniques are now commonly used to explore the feces of humans and animals under the assumption that such samples are faithful proxies for the intestinal microbiota. Sheep (Ovis aries) are ruminant animals particularly adapted to life in arid regions; the Najdi, Noaimi (Awassi), and Harrei (Harri) breeds in particular are raised in Saudi Arabia for milk and/or meat production. Here we report a metagenomics investigation of the distal digestive tract of one animal from each breed that (i) examines the microbiota at three intestinal subsites (small intestine, mid-colon, and rectum), (ii) performs an in-depth analysis of the carbohydrate-active enzyme genes encoded by the microbiota at the three subsites, and (iii) compares the microbiota and carbohydrate-active enzyme profile at the three subsites across the different breeds. For all animals we found that the small intestine is characterized by a lower taxonomic diversity than that of the large intestine and of the rectal samples. Mirroring this observation, we also find that the spectrum of encoded carbohydrate-active enzymes of the mid-colon and rectal sites is much richer than that of the small intestine. However, the number of encoded cellulases and xylanases in the various intestinal subsites was found to be surprisingly low, indicating that the bulk of the fiber digestion is performed upstream in the rumen, and that the carbon source for the intestinal flora is probably constituted of the rumen fungi and bacteria that pass into the intestines. In consequence, we argue that ruminant feces, which are often analyzed in the search for microbial genes involved in plant cell wall degradation, are probably a poor proxy for the lignocellulolytic potential of the host.
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
Villand, P; Aalen, R; Olsen, O A; Lüthi, E; Lönneborg, A; Kleczkowski, L A
1992-06-01
Several cDNAs encoding the small and large subunit of ADP-glucose pyrophosphorylase (AGP) were isolated from total RNA of the starchy endosperm, roots and leaves of barley by polymerase chain reaction (PCR). Sets of degenerate oligonucleotide primers, based on previously published conserved amino acid sequences of plant AGP, were used for synthesis and amplification of the cDNAs. For either the endosperm, roots and leaves, the restriction analysis of PCR products (ca. 550 nucleotides each) has revealed heterogeneity, suggesting presence of three transcripts for AGP in the endosperm and roots, and up to two AGP transcripts in the leaf tissue. Based on the derived amino acid sequences, two clones from the endosperm, beps and bepl, were identified as coding for the small and large subunit of AGP, respectively, while a leaf transcript (blpl) encoded the putative large subunit of AGP. There was about 50% identity between the endosperm clones, and both of them were about 60% identical to the leaf cDNA. Northern blot analysis has indicated that beps and bepl are expressed in both the endosperm and roots, while blpl is detectable only in leaves. Application of the PCR technique in studies on gene structure and gene expression of plant AGP is discussed.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between a particle position in PSO and a scheduling solution in JSP; this mapping is an essential step, since each particle in PSO must represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and a random-key encoding scheme. These procedures were tested on the FT06 and FT10 benchmark problems available in the OR-Library, with the objective of minimizing the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is to demonstrate to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
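Random-key decoding, one of the representations compared above, maps any real-valued particle position to a valid schedule by sorting: the argsort of the keys gives the operation order. A sketch, including a job-repetition variant for job-shop problems (the details here are illustrative, not the paper's exact scheme):

```python
import numpy as np

def keys_to_permutation(keys):
    """Plain random-key decoding: the argsort of the continuous key vector is
    the operation sequence, so every particle position is a valid permutation."""
    return list(np.argsort(keys))

def keys_to_jobshop_sequence(keys, n_jobs, n_machines):
    """Job-shop variant with job repetition: given n_jobs * n_machines keys,
    the rank r of each key is mapped to job r // n_machines, so every job
    appears exactly n_machines times in the decoded operation list."""
    ranks = np.argsort(np.argsort(keys))   # rank of each key within the vector
    return [int(r) // n_machines for r in ranks]
```

Because decoding is a pure function of the key vector, PSO can update particles in continuous space while every evaluated position still corresponds to a feasible schedule.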
Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.
Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M
2010-11-01
Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.
Analysis of security of optical encryption with spatially incoherent illumination technique
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Shifrina, Anna V.
2017-03-01
Applications of optical methods for encryption purposes have attracted the interest of researchers for decades. The first and most popular is the double random phase encoding (DRPE) technique, and many optical encryption techniques are based on it. The main advantage of DRPE-based techniques is high security: the first random phase mask transforms the spectrum of the image to be encrypted into a white spectrum, so encrypted images have white spectra. The downsides are the necessity of a holographic registration scheme, in order to register not only the light intensity distribution but also its phase distribution, and the speckle noise arising from coherent illumination. These disadvantages can be eliminated by using incoherent rather than coherent illumination: phase registration no longer matters, so no holographic setup is needed, and speckle noise is absent. However, since only the light intensity distribution is considered, the mean value of the image to be encrypted is always above zero, which leads to an intense zero-spatial-frequency peak in the image spectrum. Consequently, in the case of spatially incoherent illumination, neither the image spectrum nor the encryption key spectrum can be white. This might be used to crack the encryption system: if the encryption key is very sparse, the encrypted image might contain parts of, or even the whole, unhidden original image. Therefore, in this paper an analysis of the security of optical encryption with spatially incoherent illumination is conducted as a function of encryption key size and density.
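For reference, the classical coherent DRPE that the incoherent scheme is contrasted with can be sketched in a few lines of NumPy: one random phase mask is applied in the spatial domain and one in the Fourier domain, and decryption applies the conjugate keys in reverse order. A minimal numerical sketch (function names are my own; a physical implementation would use a 4f optical setup):

```python
import numpy as np

def drpe_encrypt(img, phi1, phi2):
    """Coherent DRPE: multiply by exp(i*phi1) in the spatial domain and by
    exp(i*phi2) in the Fourier domain. The first mask whitens the spectrum."""
    f = np.fft.fft2(img * np.exp(1j * phi1))
    return np.fft.ifft2(f * np.exp(1j * phi2))

def drpe_decrypt(field, phi1, phi2):
    """Invert DRPE by applying the conjugate phase keys in reverse order."""
    f = np.fft.fft2(field) * np.exp(-1j * phi2)
    return np.fft.ifft2(f) * np.exp(-1j * phi1)
```

Note that the encrypted field is complex-valued, which is precisely why coherent DRPE needs holographic (phase-sensitive) recording, the drawback the incoherent technique avoids.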
Protein Modelling: What Happened to the “Protein Structure Gap”?
Schwede, Torsten
2013-01-01
Computational modeling and prediction of three-dimensional macromolecular structures and complexes from their sequence has been a long standing vision in structural biology as it holds the promise to bypass part of the laborious process of experimental structure solution. Over the last two decades, a paradigm shift has occurred: starting from a situation where the “structure knowledge gap” between the huge number of protein sequences and small number of known structures has hampered the widespread use of structure-based approaches in life science research, today some form of structural information – either experimental or computational – is available for the majority of amino acids encoded by common model organism genomes. Template based homology modeling techniques have matured to a point where they are now routinely used to complement experimental techniques. With the scientific focus of interest moving towards larger macromolecular complexes and dynamic networks of interactions, the integration of computational modeling methods with low-resolution experimental techniques allows studying large and complex molecular machines. Computational modeling and prediction techniques are still facing a number of challenges which hamper the more widespread use by the non-expert scientist. For example, it is often difficult to convey the underlying assumptions of a computational technique, as well as the expected accuracy and structural variability of a specific model. However, these aspects are crucial to understand the limitations of a model, and to decide which interpretations and conclusions can be supported. PMID:24010712
Johnson, B R; Columbro, F; Araujo, D; Limon, M; Smiley, B; Jones, G; Reichborn-Kjennerud, B; Miller, A; Gupta, S
2017-10-01
In this paper, we present the design and measured performance of a novel cryogenic motor based on a superconducting magnetic bearing (SMB). The motor is tailored for use in millimeter-wave half-wave plate (HWP) polarimeters, where a HWP is rapidly rotated in front of a polarization analyzer or polarization-sensitive detector. This polarimetry technique is commonly used in cosmic microwave background polarization studies. The SMB we use is composed of fourteen yttrium barium copper oxide (YBCO) disks and a contiguous neodymium iron boron (NdFeB) ring magnet. The motor is a hollow-shaft motor because the HWP is ultimately installed in the rotor. The motor presented here has a 100 mm diameter rotor aperture. However, the design can be scaled up to rotor aperture diameters of approximately 500 mm. Our motor system is composed of four primary subsystems: (i) the rotor assembly, which includes the NdFeB ring magnet, (ii) the stator assembly, which includes the YBCO disks, (iii) an incremental encoder, and (iv) the drive electronics. While the YBCO is cooling through its superconducting transition, the rotor is held above the stator by a novel hold and release mechanism. The encoder subsystem consists of a custom-built encoder disk read out by two fiber optic readout sensors. For the demonstration described in this paper, we ran the motor at 50 K and tested rotation frequencies up to approximately 10 Hz. The feedback system was able to stabilize the rotation speed to approximately 0.4%, and the measured rotor orientation angle uncertainty is less than 0.15°. Lower temperature operation will require additional development activities, which we will discuss.
Semi-supervised tracking of extreme weather events in global spatio-temporal climate datasets
NASA Astrophysics Data System (ADS)
Kim, S. K.; Prabhat, M.; Williams, D. N.
2017-12-01
Deep neural networks have been successfully applied to the problem of detecting extreme weather events in large-scale climate datasets, attaining performance that overshadows all previous hand-crafted methods. Recent work has shown that a multichannel spatiotemporal encoder-decoder CNN architecture is able to localize events with semi-supervised bounding boxes. Motivated by this work, we propose a new learning approach based on Variational Auto-Encoders (VAE) and Long Short-Term Memory (LSTM) networks to track extreme weather events in spatio-temporal datasets. We treat spatio-temporal object tracking as learning the probabilistic distribution of the continuous latent features of an auto-encoder using stochastic variational inference. For this, we assume that our data are i.i.d. and that the latent features can be modeled by a Gaussian distribution. In the proposed approach, we first train a VAE to generate an approximate posterior given multichannel climate input containing an extreme climate event at a fixed time. We then predict the bounding box, location and class of extreme climate events using convolutional layers whose input concatenates three features: the embedding, the sampled mean and the standard deviation. Lastly, we train an LSTM on the concatenated input to learn the temporal structure of the dataset by recurrently feeding the output back into the next time step's VAE input. Our contribution is two-fold. First, we show the first semi-supervised end-to-end architecture based on a VAE for tracking extreme weather events, applicable to massive unlabeled climate datasets. Second, the temporal movement of events is incorporated into bounding box prediction via the LSTM, which can improve localization accuracy. To our knowledge, this technique has not been explored in either the climate or the machine learning community.
NASA Astrophysics Data System (ADS)
Johnson, B. R.; Columbro, F.; Araujo, D.; Limon, M.; Smiley, B.; Jones, G.; Reichborn-Kjennerud, B.; Miller, A.; Gupta, S.
2017-10-01
In this paper, we present the design and measured performance of a novel cryogenic motor based on a superconducting magnetic bearing (SMB). The motor is tailored for use in millimeter-wave half-wave plate (HWP) polarimeters, where a HWP is rapidly rotated in front of a polarization analyzer or polarization-sensitive detector. This polarimetry technique is commonly used in cosmic microwave background polarization studies. The SMB we use is composed of fourteen yttrium barium copper oxide (YBCO) disks and a contiguous neodymium iron boron (NdFeB) ring magnet. The motor is a hollow-shaft motor because the HWP is ultimately installed in the rotor. The motor presented here has a 100 mm diameter rotor aperture. However, the design can be scaled up to rotor aperture diameters of approximately 500 mm. Our motor system is composed of four primary subsystems: (i) the rotor assembly, which includes the NdFeB ring magnet, (ii) the stator assembly, which includes the YBCO disks, (iii) an incremental encoder, and (iv) the drive electronics. While the YBCO is cooling through its superconducting transition, the rotor is held above the stator by a novel hold and release mechanism. The encoder subsystem consists of a custom-built encoder disk read out by two fiber optic readout sensors. For the demonstration described in this paper, we ran the motor at 50 K and tested rotation frequencies up to approximately 10 Hz. The feedback system was able to stabilize the rotation speed to approximately 0.4%, and the measured rotor orientation angle uncertainty is less than 0.15°. Lower temperature operation will require additional development activities, which we will discuss.
Moimas, Silvia; Manasseri, Benedetto; Cuccia, Giuseppe; Stagno d'Alcontres, Francesco; Geuna, Stefano; Pattarini, Lucia; Zentilin, Lorena; Giacca, Mauro; Colonna, Michele R
2015-01-01
In regenerative medicine, new approaches are required for the creation of tissue substitutes, and the interplay between different research areas, such as tissue engineering, microsurgery and gene therapy, is mandatory. In this article, we report a modification of a published model of tissue engineering, based on an arterio-venous loop enveloped in a cross-linked collagen-glycosaminoglycan template, which acts as an isolated chamber for angiogenesis and new tissue formation. In order to foster tissue formation within the chamber, which entails on the development of new vessels, we wondered whether we might combine tissue engineering with a gene therapy approach. Based on the well-described tropism of adeno-associated viral vectors for post-mitotic tissues, a muscular flap was harvested from the pectineus muscle, inserted into the chamber and transduced by either AAV vector encoding human VEGF165 or AAV vector expressing the reporter gene β-galactosidase, as a control. Histological analysis of the specimens showed that muscle transduction by AAV vector encoding human VEGF165 resulted in enhanced tissue formation, with a significant increase in the number of arterioles within the chamber in comparison with the previously published model. Pectineus muscular flap, transduced by adeno-associated viral vectors, acted as a source of the proangiogenic factor vascular endothelial growth factor, thus inducing a consistent enhancement of vessel growth into the newly formed tissue within the chamber. In conclusion, our present findings combine three different research fields such as microsurgery, tissue engineering and gene therapy, suggesting and showing the feasibility of a mixed approach for regenerative medicine.
A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis
Rahman, M. M.; Antani, S. K.; Thoma, G. R.
2011-01-01
We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
On the Suitability of Suffix Arrays for Lempel-Ziv Data Compression
NASA Astrophysics Data System (ADS)
Ferreira, Artur J.; Oliveira, Arlindo L.; Figueiredo, Mário A. T.
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
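The dictionary search described above can be sketched as a longest-prefix match against a suffix array: binary-search for the pattern's position among the sorted suffixes, then check its neighbors, which are the suffixes sharing the longest common prefix with it. This is a simplified illustration with a static dictionary and a naive O(n² log n) suffix-array build, not the paper's algorithm.

```python
def suffix_array(s):
    """Naive suffix array: indices of suffixes in lexicographic order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def match_len(s, i, pattern):
    n = 0
    while i + n < len(s) and n < len(pattern) and s[i + n] == pattern[n]:
        n += 1
    return n

def longest_match(dictionary, sa, pattern):
    """Return (position, length) of the longest prefix of `pattern`
    occurring in `dictionary`, as an LZ encoder would emit."""
    lo, hi = 0, len(sa)
    while lo < hi:  # insertion point of `pattern` among sorted suffixes
        mid = (lo + hi) // 2
        if dictionary[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    best = (0, 0)
    for j in (lo - 1, lo):  # the best match borders the insertion point
        if 0 <= j < len(sa):
            l = match_len(dictionary, sa[j], pattern)
            if l > best[1]:
                best = (sa[j], l)
    return best
```

The fixed-size `sa` array is what gives the memory advantage over suffix trees noted above.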
Upconversion Nanoparticles-Encoded Hydrogel Microbeads-Based Multiplexed Protein Detection
NASA Astrophysics Data System (ADS)
Shikha, Swati; Zheng, Xiang; Zhang, Yong
2018-06-01
Fluorescently encoded microbeads are in demand for multiplexed applications in different fields. Compared with the commercially available organic-dye-based Luminex xMAP technology, upconversion nanoparticles (UCNPs) are better alternatives due to their large anti-Stokes shift, photostability, negligible background, and single-wavelength excitation. Here, we developed a new multiplexed detection system using UCNPs both for encoding poly(ethylene glycol) diacrylate (PEGDA) microbeads and for labeling the reporter antibody. However, for preparing UCNPs-encoded microbeads, the currently used swelling-based encapsulation leads to non-uniformity, which is undesirable for fluorescence-based multiplexing. Hence, we utilized droplet microfluidics to obtain encoded microbeads of uniform size, shape, and internal UCNPs distribution. Additionally, PEGDA microbeads lack functionality for conjugation of probe antibodies on their surface. The methods reported thus far to functionalize the surface of PEGDA microbeads (acrylic acid incorporation, polydopamine coating) quench the fluorescence of UCNPs. Here, the PEGDA microbead surface was coated with silica followed by carboxyl modification without compromising the fluorescence intensity of the UCNPs. In this study, droplet microfluidics-assisted UCNPs-encoded microbeads of uniform shape, size, and fluorescence were prepared. Multiple color codes were generated by mixing UCNPs emitting red and green colors at different ratios prior to encapsulation. UCNPs emitting blue color were used to label the reporter antibody. Probe antibodies were covalently immobilized on red UCNPs-encoded microbeads for specific capture of human serum albumin (HSA) as a model protein. The system was also demonstrated for multiplexed detection of both human C-reactive protein (hCRP) and HSA by immobilizing anti-hCRP antibodies on green UCNPs-encoded microbeads.
Automatic reactor model synthesis with genetic programming.
Dürrenmatt, David J; Gujer, Willi
2012-01-01
Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site.
Molecular characterization of southern bluefin tuna myoglobin (Thunnus maccoyii).
Nurilmala, Mala; Ochiai, Yoshihiro
2016-10-01
The primary structure of southern bluefin tuna (Thunnus maccoyii) Mb has been elucidated by molecular cloning techniques. The cDNA of this tuna encoding Mb contained 776 nucleotides, with an open reading frame of 444 nucleotides encoding 147 amino acids. The nucleotide sequence of the coding region was identical to those of the other bluefin tunas (T. thynnus and T. orientalis), thus giving the same amino acid sequence. Based on the deduced amino acid sequence, bioinformatic analyses were performed, including a phylogenetic tree, hydropathy plot and homology modeling. In order to investigate the autoxidation profiles, Mb was isolated from the dark muscle. The water-soluble fraction was subjected to ammonium sulfate fractionation (60-90 % saturation) followed by preparative gel electrophoresis. Autoxidation profiles of Mb were delineated at pH 5.6, 6.5 and 7.4 at 37 °C. The autoxidation rate of tuna Mb was slightly higher than that of horse Mb at all pH values examined. These results revealed that tuna Mb was less stable than horse Mb, especially at acidic pH.
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
Hu, Jian; Zhou, Yi-ren; Ding, Jia-lin; Wang, Zhi-yuan; Liu, Ling; Wang, Ye-kai; Lou, Hui-ling; Qiao, Shou-yi; Wu, Yan-hua
2017-05-20
The ABO blood type is one of the most common and widely used genetic traits in humans. Three glycosyltransferase-encoding gene alleles, IA, IB and i, produce three red blood cell surface antigens, by which the ABO blood type is classified. Using the ABO blood type experiment as an ideal case for genetics teaching, we can easily introduce students to several genetic concepts, including multiple alleles, gene interaction, single nucleotide polymorphism (SNP) and gene evolution. Here we have innovated and integrated our ABO blood type genetics experiments. First, in the Molecular Genetics section, a new method of ABO blood genotyping was established: specific primers based on SNP sites were designed to distinguish the three alleles through quantitative real-time PCR. Next, the experimental teaching method of Gene Evolution was innovated in the Population Genetics section: a gene-evolution software package was developed to simulate the evolutionary tendency of the ABO-encoding alleles under diverse conditions. Our reform aims to extend the content of the genetics experiments, to provide additional teaching approaches, and ultimately to improve the learning efficiency of our students.
Lower Parietal Encoding Activation Is Associated with Sharper Information and Better Memory.
Lee, Hongmi; Chun, Marvin M; Kuhl, Brice A
2017-04-01
Mean fMRI activation in ventral posterior parietal cortex (vPPC) during memory encoding often negatively predicts successful remembering. A popular interpretation of this phenomenon is that vPPC reflects "off-task" processing. However, recent fMRI studies considering distributed patterns of activity suggest that vPPC actively represents encoded material. Here, we assessed the relationships between pattern-based content representations in vPPC, mean activation in vPPC, and subsequent remembering. We analyzed data from two fMRI experiments where subjects studied then recalled word-face or word-scene associations. For each encoding trial, we measured 1) mean univariate activation within vPPC and 2) the strength of face/scene information as indexed by pattern analysis. Mean activation in vPPC negatively predicted subsequent remembering, but the strength of pattern-based information in the same vPPC voxels positively predicted later memory. Indeed, univariate amplitude averaged across vPPC voxels negatively correlated with pattern-based information strength. This dissociation reflected a tendency for univariate reductions to maximally occur in voxels that were not strongly tuned for the category of encoded stimuli. These results indicate that vPPC activity patterns reflect the content and quality of memory encoding and constitute a striking example of lower univariate activity corresponding to stronger pattern-based information.
PCR cloning and characterization of multiple ADP-glucose pyrophosphorylase cDNAs from tomato
NASA Technical Reports Server (NTRS)
Chen, B. Y.; Janes, H. W.; Gianfagna, T.
1998-01-01
Four ADP-glucose pyrophosphorylase (AGP) cDNAs were cloned from tomato fruit and leaves by PCR techniques. Three of them (agp S1, agp S2, and agp S3) encode the large subunit of AGP; the fourth (agp B) encodes the small subunit. The deduced amino acid sequences of the cDNAs show very high identities (96-98%) to the corresponding potato AGP isoforms, although there are major differences in tissue expression profiles. All four tomato AGP transcripts were detected in fruit and leaves; the predominant ones in fruit are agp B and agp S1, whereas in leaves they are agp B and agp S3. Genomic Southern analysis suggests that the four AGP transcripts are encoded by distinct genes.
Formosa, Luke E; Hofer, Annette; Tischner, Christin; Wenz, Tina; Ryan, Michael T
2016-01-01
In higher eukaryotes, the mitochondrial electron transport chain consists of five multi-subunit membrane complexes responsible for the generation of cellular ATP. Of these, four complexes are under dual genetic control as they contain subunits encoded by both the mitochondrial and nuclear genomes, thereby adding another layer of complexity to the puzzle of respiratory complex biogenesis. These subunits must be synthesized and assembled in a coordinated manner in order to ensure correct biogenesis of different respiratory complexes. Here, we describe techniques to (1) specifically radiolabel proteins encoded by mtDNA to monitor the rate of synthesis using pulse labeling methods, and (2) analyze the stability, assembly, and turnover of subunits using pulse-chase methods in cultured cells and isolated mitochondria.
Automatic Generation of Heuristics for Scheduling
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.
1997-01-01
This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
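The core idea, hill-climbing over the weights of a multi-attribute heuristic where each candidate is scored by the quality of the schedule it produces, can be sketched on a toy single-machine problem. This is a generic illustration under assumed attributes (duration, due date) and a tardiness objective, not the GenH system itself.

```python
import itertools

def total_tardiness(jobs, weights):
    """Schedule jobs (duration, due_date) on one machine by a weighted
    multi-attribute priority rule; return total tardiness (the
    solution-quality signal used to evaluate a candidate heuristic)."""
    w_dur, w_due = weights
    order = sorted(jobs, key=lambda j: w_dur * j[0] + w_due * j[1])
    t = tardy = 0
    for dur, due in order:
        t += dur
        tardy += max(0, t - due)
    return tardy

def hill_climb(jobs, start=(1.0, 1.0), step=0.5, iters=50):
    """Hill-climb in weight space, accepting any neighboring weight
    vector whose induced schedule has lower tardiness."""
    best_w, best_q = start, total_tardiness(jobs, start)
    for _ in range(iters):
        improved = False
        for dw in itertools.product((-step, 0, step), repeat=2):
            cand = (best_w[0] + dw[0], best_w[1] + dw[1])
            q = total_tardiness(jobs, cand)
            if q < best_q:
                best_w, best_q, improved = cand, q, True
        if not improved:
            break
    return best_w, best_q
```

The instance-specific tuning is the point: the returned weights are fitted to the given job set, not to a fixed rule like shortest-processing-time.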
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
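The Lagrangian idea behind the entropy constraint can be shown on a single quantizer stage: each codeword is charged its distortion plus λ times its entropy-coded rate, and the encoder picks the cheapest. This is a one-stage sketch of the entropy-constrained selection rule, not the full multistage EC-RVQ design algorithm.

```python
import math

def ecq_encode(x, codebook, probs, lam):
    """Entropy-constrained encoding of scalar x: choose the index
    minimizing d(x, c) + lam * rate(c), where rate(c) = -log2 p(c)
    is the ideal entropy-coded codeword length."""
    def cost(i):
        d = (x - codebook[i]) ** 2          # squared-error distortion
        r = -math.log2(probs[i])            # rate under entropy coding
        return d + lam * r
    return min(range(len(codebook)), key=cost)
```

With λ = 0 this reduces to nearest-neighbor quantization; increasing λ biases the encoder toward frequent (cheap-to-code) codewords, tracing out the rate-distortion trade-off.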
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
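The syndrome-based mechanism above, sending only syndrome bits and letting the decoder combine them with correlated side information, can be illustrated with a tiny (7,4) Hamming code instead of the paper's LDPC codes and sum-product decoding. This is a minimal sketch of the Slepian-Wolf principle, assuming the side information differs from the source in at most one bit.

```python
# Parity-check matrix of the (7,4) Hamming code; column i is the
# binary representation of i, so a single-bit error's syndrome
# directly names the flipped position.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(bits):
    """3-bit syndrome H·x mod 2 of a 7-bit word: the encoder's entire output."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def dsc_decode(syn, side_info):
    """Recover the source from its syndrome plus correlated side info:
    the syndrome difference locates the single mismatched bit."""
    diff = [a ^ b for a, b in zip(syn, syndrome(side_info))]
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]  # 1-indexed position, 0 = none
    est = list(side_info)
    if pos:
        est[pos - 1] ^= 1
    return est
```

The compression comes from sending 3 syndrome bits instead of 7 source bits; the decoder's side information supplies the rest, which is exactly the asymmetry the Slepian-Wolf bound quantifies.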
Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation
NASA Astrophysics Data System (ADS)
Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter
1996-01-01
The development of three-dimensional automotive devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned with high accuracy and reliability, and automatically, by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems associated with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end-position of the system is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.
Four-dimensional modulation and coding: An alternate to frequency-reuse
NASA Technical Reports Server (NTRS)
Wilson, S. G.; Sleeper, H. A.
1983-01-01
Four-dimensional modulation as a means of improving communication efficiency on the band-limited Gaussian channel is discussed, with the four dimensions of signal space constituted by phase-orthogonal carriers (cos omega sub c t and sin omega sub c t) transmitted simultaneously on spatially orthogonal electromagnetic waves. 'Frequency reuse' techniques use such polarization orthogonality to reuse the same frequency slot, but the modulation is not treated as four-dimensional; rather, it is a product of two-D modulations, e.g., QPSK. It is well known that higher-dimensionality signalling affords possible improvements in the power-bandwidth sense. Four-D modulations based upon subsets of lattice packings in four-D, which afford simplification of encoding and decoding, are described. Sets of up to 1024 signals are constructed in four-D, providing a (Nyquist) spectral efficiency of up to 10 bps/Hz. Energy gains over the reuse technique are in the one to three dB range at equal bandwidth.
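The decoding simplification afforded by lattice structure can be illustrated with D4, the classic dense four-dimensional lattice (integer 4-vectors with even coordinate sum). The abstract does not name its lattice subsets, so D4 here is an assumption for illustration; the nearest-point rule follows the well-known Conway-Sloane procedure.

```python
def decode_d4(x):
    """Nearest point of the D4 lattice to a real 4-vector x.
    D4 = integer 4-vectors whose coordinates sum to an even number."""
    f = [round(c) for c in x]          # round each coordinate
    if sum(f) % 2 == 0:
        return f
    # Parity is odd: re-round the coordinate with the largest rounding
    # error in the other direction to restore an even sum.
    i = max(range(4), key=lambda k: abs(x[k] - f[k]))
    f[i] += 1 if x[i] > f[i] else -1
    return f
```

A maximum-likelihood decision over the whole constellation thus reduces to four roundings and one parity fix, which is the kind of encoding/decoding simplification the abstract refers to.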
Four-dimensional modulation and coding - An alternate to frequency-reuse
NASA Technical Reports Server (NTRS)
Wilson, S. G.; Sleeper, H. A.; Srinath, N. K.
1984-01-01
Four-dimensional modulation as a means of improving communication efficiency on the band-limited Gaussian channel is discussed, with the four dimensions of signal space constituted by phase-orthogonal carriers (cos omega sub c t and sin omega sub c t) transmitted simultaneously on spatially orthogonal electromagnetic waves. 'Frequency reuse' techniques use such polarization orthogonality to reuse the same frequency slot, but the modulation is not treated as four-dimensional; rather, it is a product of two-D modulations, e.g., QPSK. It is well known that higher-dimensionality signalling affords possible improvements in the power-bandwidth sense. Four-D modulations based upon subsets of lattice packings in four-D, which afford simplification of encoding and decoding, are described. Sets of up to 1024 signals are constructed in four-D, providing a (Nyquist) spectral efficiency of up to 10 bps/Hz. Energy gains over the reuse technique are in the one to three dB range at equal bandwidth.
Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao
2009-01-01
Without visual information, blind people face hardships in shopping, reading, finding objects, and so on. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the user through the earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
Imaging of Biological Tissues by Visible Light CDI
NASA Astrophysics Data System (ADS)
Karpov, Dmitry; Dos Santos Rolo, Tomy; Rich, Hannah; Fohtung, Edwin
Recent advances in the use of synchrotron and X-ray free electron laser (XFEL) based coherent diffraction imaging (CDI), with applications to materials science and medicine, have proved the technique to be efficient in recovering information about samples encoded in the phase domain. The current state-of-the-art reconstruction algorithms are transferable to optical frequencies, which makes laser sources a reasonable milestone both for technique development and for applications. Here we present the first results from a table-top laser CDI system for imaging of biological tissues, describe the development of reconstruction algorithms, and discuss complementary approaches to improving data quality that are enabled by the properties of visible light. We demonstrate the applicability of the developed methodology to a wide class of soft bio-matter and condensed matter systems. This project is funded by DOD-AFOSR under Award No FA9550-14-1-0363 and the LANSCE Professorship at LANL.
Ferreri, Laura; Bigand, Emmanuel; Bard, Patrick; Bugaiska, Aurélia
2015-01-01
Music can be thought of as a complex stimulus able to enrich the encoding of an event thus boosting its subsequent retrieval. However, several findings suggest that music can also interfere with memory performance. A better understanding of the behavioral and neural processes involved can substantially improve knowledge and shed new light on the most efficient music-based interventions. Based on fNIRS studies on music, episodic encoding, and the dorsolateral prefrontal cortex (PFC), this work aims to extend previous findings by monitoring the entire lateral PFC during both encoding and retrieval of verbal material. Nineteen participants were asked to encode lists of words presented with either background music or silence and subsequently tested during a free recall task. Meanwhile, their PFC was monitored using a 48-channel fNIRS system. Behavioral results showed greater chunking of words under the music condition, suggesting the employment of associative strategies for items encoded with music. fNIRS results showed that music provided a less demanding way of modulating both episodic encoding and retrieval, with a general prefrontal decreased activity under the music versus silence condition. This suggests that music-related memory processes rely on specific neural mechanisms and that music can positively influence both episodic encoding and retrieval of verbal information.
Discovering Recurring Anomalies in Text Reports Regarding Complex Space Systems
NASA Technical Reports Server (NTRS)
Zane-Ulman, Brett; Srivastava, Ashok N.
2005-01-01
Many existing complex space systems have a significant amount of historical maintenance and problem data that are stored in unstructured text form. For some platforms, these reports may exist only as scanned images rather than searchable text. The problem that we address in this paper is the discovery of recurring anomalies and relationships between different problem reports that may indicate larger systemic problems. We illustrate our techniques on data from discrepancy reports regarding software anomalies in the Space Shuttle. These free-text reports are written by a number of different people, so the emphasis and wording vary considerably.
A hybrid modulation for the dissemination of weather data to aircraft
NASA Technical Reports Server (NTRS)
Akos, Dennis M.
1991-01-01
Ohio University is continuing to conduct research to improve its system for weather data dissemination to aircraft. The current experimental system transmits compressed weather radar reflectivity patterns from a ground-based station to aircraft. Although the system is effective, the limited frequency spectrum does not provide a dedicated channel for transmission. This motivates the idea of a hybrid modulation. The hybrid technique encodes weather data using phase modulation (PM) onto an existing aeronautical channel that employs amplitude modulation (AM) for voice signal transmission. Ideally, the two modulations are independent of one another. The planned implementation and the basis of the system are then reviewed.
Variable disparity-motion estimation based fast three-view video coding
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.
Trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
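The representation underlying the scheme above can be sketched with balanced-ternary digits {-1, 0, 1} in radix 3, one common trinary signed-digit convention; this is an illustrative conversion only, and the papers' one-step carry-free adder and 5-combination coding table are not reproduced here.

```python
def to_tsd(n):
    """Encode an integer as balanced-ternary (TSD) digits {-1, 0, 1},
    least significant digit first; 0 encodes as [0]."""
    if n == 0:
        return [0]
    digits = []
    while n:
        r = n % 3                  # remainder 0, 1, or 2
        if r == 2:                 # represent 2 as digit -1 plus a carry
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_tsd(digits):
    """Decode least-significant-first TSD digits back to an integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

Because each digit can be negative, negation is just digit-wise sign flip, which is what makes borrow-free subtraction possible in such systems.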
One-step trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
Doppler imaging using spectrally-encoded endoscopy
Yelin, Dvir; Bouma, B. E.; Rosowsky, J. J.; Tearney, G. J.
2009-01-01
The capability to image tissue motion such as blood flow through an endoscope could have many applications in medicine. Spectrally encoded endoscopy (SEE) is a recently introduced technique that utilizes a single optical fiber and miniature diffractive optics to obtain endoscopic images through small diameter probes. Using spectral-domain interferometry, SEE is furthermore capable of three-dimensional volume imaging at video rates. Here we show that by measuring relative spectral phases, this technology can additionally measure Doppler shifts. Doppler SEE is demonstrated in flowing Intralipid phantoms and vibrating middle ear ossicles.
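The phase-difference principle behind such Doppler measurements can be sketched as follows: the mean phase change between successive complex interferometric samples at the same location is proportional to axial velocity. This is a generic illustration of phase-sensitive Doppler estimation, not the SEE processing chain; the wavelength and frame-interval values in the test are purely illustrative.

```python
import cmath
import math

def doppler_velocity(z_prev, z_curr, wavelength, dt, n_medium=1.0):
    """Estimate axial velocity from two aligned lists of complex
    samples acquired dt seconds apart: the angle of the averaged
    lag-one autocorrelation gives the mean phase shift, and
    v = wavelength * dphi / (4 * pi * n * dt)."""
    acc = sum(b * a.conjugate() for a, b in zip(z_prev, z_curr))
    dphi = cmath.phase(acc)  # mean phase shift, in (-pi, pi]
    return wavelength * dphi / (4 * math.pi * n_medium * dt)
```

The (-pi, pi] range of the phase sets the usual aliasing-free velocity limit of such estimators.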
Respiratory motion resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK)
Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng
2017-01-01
Purpose: To propose and validate a respiratory motion resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. Methods: The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel ROtating Cartesian K-space (ROCK) reordering method was designed that incorporates a repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space data were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in 6 healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. Results: The 5-minute 4D-MRI scan offers high-quality volumetric images at 1.2×1.2×1.6 mm3 and 8 respiratory positions, with good soft-tissue contrast. In phantom experiments with a triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a −12.5% difference was expected due to data binning effects. In healthy volunteers, the differences between the measurements based on 4D-MRI and those based on 2D-CINE were 6.2±4.5% for the diaphragm, and 8.2±4.9% and 8.9±5.1% for the right and left kidney. Conclusion: The proposed 4D-MRI technique provides high-resolution, high-quality, respiratory motion resolved 4D images with good soft-tissue contrast that are free of the "stitching" artifacts usually seen in 4D-CT and in 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment planning.
PMID:28133752
Respiratory motion-resolved, self-gated 4D-MRI using rotating cartesian k-space (ROCK).
Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng
2017-04-01
To propose and validate a respiratory motion-resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel rotating Cartesian k-space (ROCK) reordering method was designed which incorporates a repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space datasets were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in six healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. The 5-minute 4D-MRI scan offers high-quality volumetric images at 1.2 × 1.2 × 1.6 mm³ resolution and eight respiratory positions, with good soft-tissue contrast. In phantom experiments with a triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a -12.5% difference was expected due to data binning effects. In healthy volunteers, the differences between the measurements based on 4D-MRI and those based on 2D-CINE were 6.2 ± 4.5% for the diaphragm, and 8.2 ± 4.9% and 8.9 ± 5.1% for the right and left kidney, respectively. The proposed 4D-MRI technique can provide high-resolution, high-quality, respiratory motion-resolved 4D images with good soft-tissue contrast that are free of the "stitching" artifacts usually seen in 4D-CT and in 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment planning.
© 2017 American Association of Physicists in Medicine.
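The amplitude-binning step at the heart of the ROCK reordering can be sketched as follows; the surrogate waveform, TR, and bin edges below are illustrative stand-ins, not the actual self-gating signal processing:

```python
import numpy as np

def bin_by_amplitude(surrogate, n_bins=8):
    """Assign each k-space readout to a respiratory bin based on the
    amplitude of its self-gating surrogate sample.

    Bin edges are percentiles of the surrogate, so each bin receives
    roughly the same number of readouts (amplitude binning)."""
    edges = np.percentile(surrogate, np.linspace(0, 100, n_bins + 1))
    # np.digitize against the interior edges yields indices 0..n_bins-1
    return np.digitize(surrogate, edges[1:-1])

# Simulated breathing surrogate: ~0.25 Hz sinusoid sampled once per TR
t = np.arange(0, 60, 0.005)                  # 5 ms TR, 60 s scan
surrogate = np.sin(2 * np.pi * 0.25 * t)
bins = bin_by_amplitude(surrogate, n_bins=8)
counts = np.bincount(bins, minlength=8)      # readouts per respiratory position
```

Each respiratory position's subset of k-space lines would then be handed to the joint parallel imaging / compressed sensing reconstruction.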
Data transmission system and method
NASA Technical Reports Server (NTRS)
Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)
2010-01-01
A method of transmitting data packets, where randomness is added to the schedule. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing
NASA Astrophysics Data System (ADS)
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C.; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the GPU. We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation and the memory access are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU to spare the GPU an unnecessary extra computation burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, which have harmoniously leveraged the advantages of GPU platforms and yielded significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
Leroch, Michaela; Mernke, Dennis; Koppenhoefer, Dieter; Schneider, Prisca; Mosbach, Andreas; Doehlemann, Gunther; Hahn, Matthias
2011-05-01
The green fluorescent protein (GFP) and its variants have been widely used in modern biology as reporters that allow a variety of live-cell imaging techniques. So far, GFP has rarely been used in the gray mold fungus Botrytis cinerea because of low fluorescence intensity. The codon usage of B. cinerea genes strongly deviates from that of commonly used GFP-encoding genes and reveals a lower GC content than other fungi. In this study, we report the development and use of a codon-optimized version of the B. cinerea enhanced GFP (eGFP)-encoding gene (Bcgfp) for improved expression in B. cinerea. Both the codon optimization and, to a lesser extent, the insertion of an intron resulted in higher mRNA levels and increased fluorescence. Bcgfp was used for localization of nuclei in germinating spores and for visualizing host penetration. We further demonstrate the use of promoter-Bcgfp fusions for quantitative evaluation of various toxic compounds as inducers of the atrB gene encoding an ABC-type drug efflux transporter of B. cinerea. In addition, a codon-optimized mCherry-encoding gene was constructed which yielded bright red fluorescence in B. cinerea.
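The codon-optimization idea can be illustrated with a toy back-translation: each amino acid is recoded with the codon preferred by the target organism. The preference table below is illustrative only and is not the actual B. cinerea codon-usage table:

```python
# Illustrative "preferred codon" table (one synonymous codon per amino
# acid).  A real optimization would use the organism's measured codon
# usage frequencies; this table is NOT the B. cinerea table.
PREFERRED = {
    "A": "GCT", "C": "TGC", "D": "GAC", "E": "GAA", "F": "TTC",
    "G": "GGT", "H": "CAC", "I": "ATC", "K": "AAG", "L": "CTC",
    "M": "ATG", "N": "AAC", "P": "CCT", "Q": "CAA", "R": "CGT",
    "S": "TCC", "T": "ACT", "V": "GTC", "W": "TGG", "Y": "TAC",
}

def codon_optimize(protein):
    """Back-translate a protein using the preferred codon per residue."""
    return "".join(PREFERRED[aa] for aa in protein)

def gc_content(dna):
    """Fraction of G and C bases, the property noted in the abstract."""
    return sum(dna.count(b) for b in "GC") / len(dna)

opt = codon_optimize("MVSKG")   # first residues of a GFP-like protein
gc = gc_content(opt)
```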
Whole-heart chemical shift encoded water-fat MRI.
Taviani, Valentina; Hernando, Diego; Francois, Christopher J; Shimakawa, Ann; Vigen, Karl K; Nagle, Scott K; Schiebler, Mark L; Grist, Thomas M; Reeder, Scott B
2014-09-01
To develop and evaluate a free-breathing chemical-shift-encoded (CSE) spoiled gradient-recalled echo (SPGR) technique for whole-heart water-fat imaging at 3 Tesla (T). We developed a three-dimensional (3D) multi-echo SPGR pulse sequence with electrocardiographic gating and navigator echoes and evaluated its performance at 3T in healthy volunteers (N = 6) and patients (N = 20). CSE-SPGR, 3D SPGR, and 3D balanced-SSFP with chemical fat saturation were compared in six healthy subjects with images evaluated for overall image quality, level of residual artifacts, and quality of fat suppression. A similar scoring system was used for the patient datasets. Images of diagnostic quality were acquired in all but one subject. CSE-SPGR performed similarly to SPGR with fat saturation, although it provided a more uniform fat suppression over the whole field of view. Balanced-SSFP performed worse than SPGR-based methods. In patients, CSE-SPGR produced excellent fat suppression near metal. Overall image quality was either good (7/20) or excellent (12/20) in all but one patient. There were significant artifacts in 5/20 clinical cases. CSE-SPGR is a promising technique for whole-heart water-fat imaging during free-breathing. The robust fat suppression in the water-only image could improve assessment of complex morphology at 3T and in the presence of off-resonance, with additional information contained in the fat-only image. Copyright © 2013 Wiley Periodicals, Inc.
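The water-fat separation underlying chemical-shift-encoded imaging reduces, in its simplest two-point Dixon form, to a sum and a difference of echoes. The actual CSE-SPGR method uses more echoes plus a B0 field map, so this is only the core idea:

```python
import numpy as np

def two_point_dixon(s_in, s_out):
    """Simplest two-point Dixon separation: with an in-phase echo
    S_in = W + F and an opposed-phase echo S_out = W - F, water and
    fat images follow by sum and difference."""
    water = 0.5 * (s_in + s_out)
    fat = 0.5 * (s_in - s_out)
    return water, fat

# Synthetic 2x2 "pixels" with known water and fat signal fractions
W = np.array([[1.0, 0.8], [0.2, 0.0]])
F = np.array([[0.0, 0.2], [0.8, 1.0]])
water, fat = two_point_dixon(W + F, W - F)
```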
Monitoring Syllable Boundaries during Speech Production
ERIC Educational Resources Information Center
Jansma, Bernadette M.; Schiller, Niels O.
2004-01-01
This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a…
NASA Astrophysics Data System (ADS)
Gurkovskiy, B. V.; Zhuravlev, B. V.; Onishchenko, E. M.; Simakov, A. B.; Trifonova, N. Yu; Voronov, Yu A.
2016-10-01
A new instrumental technique is suggested for studying the psycho-physiological reactions of bio-objects under microwave electromagnetic radiation modulated by interval patterns of neural activity in the brain, registered under different biological motivations. Preliminary results of tests of this new tool in real psycho-physiological experiments on rats are presented.
Design of UAV high resolution image transmission system
NASA Astrophysics Data System (ADS)
Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng
2017-02-01
In order to solve the problem of the bandwidth limitation of the image transmission system on a UAV, a scheme with image compression technology for a mini UAV is proposed, based on the requirements of a high-definition image transmission system for UAVs. The H.264 video codec standard coding module and its key technology were analyzed and studied for UAV area video communication. Based on research into high-resolution image encoding and decoding techniques and wireless transmission methods, the high-resolution image transmission system was designed on an Android architecture with a video codec chip. The constructed system was verified by laboratory experiments: the bit rate can be controlled easily, the QoS is stable, and the low latency meets most application requirements, not only for military use but also for industrial applications.
Mutual information-based facial expression recognition
NASA Astrophysics Data System (ADS)
Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah
2013-12-01
This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, while reducing the feature vector dimension.
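The LBP feature extraction step can be sketched as follows; for brevity this operates on a raw intensity patch, whereas the paper applies LBP to the gradient image:

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour Local Binary Pattern for the interior pixels
    of a grayscale image: each neighbour >= centre contributes one bit
    of the 8-bit code."""
    c = img[1:-1, 1:-1]
    # clockwise neighbours starting at the top-left corner, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (nb >= c).astype(int) << bit
    return code

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]])
codes = lbp_8(img)   # centre darker than every neighbour: all 8 bits set
```

A histogram of these codes over each selected region would then form the feature vector.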
Breast tumor malignancy modelling using evolutionary neural logic networks.
Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia
2006-01-01
The present work proposes a computer-assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were properly encoded in a computer database, and various alternative modelling techniques were then applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities, and in terms of comprehensibility and practical importance for the medical staff involved.
On describing human white matter anatomy: the white matter query language.
Wassermann, Demian; Makris, Nikos; Rathi, Yogesh; Shenton, Martha; Kikinis, Ron; Kubicki, Marek; Westin, Carl-Fredrik
2013-01-01
The main contribution of this work is the careful syntactical definition of major white matter tracts in the human brain based on a neuroanatomist's expert knowledge. We present a technique to formally describe white matter tracts and to automatically extract them from diffusion MRI data. The framework is based on a novel query language with a near-to-English textual syntax. This query language allows us to construct a dictionary of anatomical definitions describing white matter tracts. The definitions include adjacent gray and white matter regions, and rules for spatial relations. This enables automated coherent labeling of white matter anatomy across subjects. We use our method to encode anatomical knowledge in human white matter describing 10 association and 8 projection tracts per hemisphere and 7 commissural tracts. The technique is shown to be comparable in accuracy to manual labeling. We present results applying this framework to create a white matter atlas from 77 healthy subjects, and we use this atlas in a proof-of-concept study to detect tract changes specific to schizophrenia.
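A tract definition of this kind can be pictured as a set of predicates over a streamline's endpoints and traversed regions. The sketch below is invented for illustration and does not reproduce the actual WMQL syntax or its anatomical dictionary:

```python
# Illustrative only -- NOT the actual WMQL syntax.  A "definition"
# names labels the streamline's endpoints must reach and labels it
# must not traverse; a streamline is a list of voxel coordinates.
def matches(streamline, definition, label_of):
    labels = [label_of(p) for p in streamline]
    ends = {labels[0], labels[-1]}
    if not set(definition["endpoint_in"]) <= ends:
        return False                       # an endpoint region is missed
    if set(definition.get("not_traverse", [])) & set(labels):
        return False                       # crosses a forbidden region
    return True

# Toy label map: everything is generic white matter except three voxels
label_of = lambda p: {(0, 0): "frontal", (5, 5): "temporal",
                      (9, 9): "occipital"}.get(p, "wm")
arcuate_like = {"endpoint_in": ["frontal", "temporal"],
                "not_traverse": ["occipital"]}
ok = matches([(0, 0), (2, 2), (5, 5)], arcuate_like, label_of)
```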
Secure steganography designed for mobile platforms
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath; Sifuentes, Ronnie R.
2006-05-01
Adaptive steganography, an intelligent approach to message hiding, integrated with matrix encoding and pn-sequences, serves as a promising solution to recent security assurance concerns. Incorporating the above data hiding concepts with established cryptographic protocols in wireless communication would greatly increase the security and privacy of transmitted sensitive information. We present an algorithm which addresses the following problems: 1) low embedding capacity in mobile devices due to fixed image dimensions and memory constraints, 2) compatibility between mobile and land-based desktop computers, and 3) detection of stego images by widely available steganalysis software [1-3]. Consistent with the smaller available memory, processor capabilities, and limited resolution associated with mobile devices, we propose a more magnified approach to steganography by focusing adaptive efforts at the pixel level. This deeper method, in comparison to the block processing techniques commonly found in existing adaptive methods, allows an increase in capacity while still offering a desired level of security. Based on computer simulations using high-resolution natural imagery and mobile device captured images, comparisons show that the proposed method securely allows an increased amount of embedding capacity while still avoiding detection by various steganalysis techniques.
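Matrix encoding, one ingredient of the scheme, can be shown in its smallest (1, 3, 2) Hamming form: two message bits hidden in three cover bits while changing at most one of them. This is the general matrix-encoding idea, not this paper's specific embedding:

```python
def matrix_embed(cover, msg):
    """(1,3,2) matrix encoding: hide 2 message bits in 3 cover bits,
    changing at most one cover bit.  The receiver reads
    m1 = c1 ^ c3 and m2 = c2 ^ c3."""
    c1, c2, c3 = cover
    m1, m2 = msg
    d1 = m1 ^ c1 ^ c3          # does m1 need fixing?
    d2 = m2 ^ c2 ^ c3          # does m2 need fixing?
    if (d1, d2) == (1, 0):
        c1 ^= 1                # fixes m1 only
    elif (d1, d2) == (0, 1):
        c2 ^= 1                # fixes m2 only
    elif (d1, d2) == (1, 1):
        c3 ^= 1                # fixes both at once
    return [c1, c2, c3]

def matrix_extract(stego):
    c1, c2, c3 = stego
    return [c1 ^ c3, c2 ^ c3]

stego = matrix_embed([0, 0, 0], [1, 1])   # only c3 needs to flip
```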
Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
Mostafa, Hesham
2017-08-01
Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
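The piecewise-linear relation can be sketched for a single neuron. Assuming a non-leaky integrate-and-fire model with exponentially decaying synaptic current (time constant and threshold both 1, as a simplifying assumption), substituting z = exp(t) gives z_out = sum(w_i z_i) / (sum(w_i) - 1) over the causal input set:

```python
import math

def output_spike_time(times, weights):
    """First output spike time of a non-leaky integrate-and-fire neuron
    with exponential synaptic currents (tau = 1, threshold = 1).
    With z = exp(t) the input-output relation is piecewise linear:
    z_out = sum(w_i * z_i) / (sum(w_i) - 1) over the causal inputs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    wsum = zsum = 0.0
    for k, i in enumerate(order):
        wsum += weights[i]
        zsum += weights[i] * math.exp(times[i])
        if wsum > 1.0:
            t_out = math.log(zsum / (wsum - 1.0))
            nxt = times[order[k + 1]] if k + 1 < len(order) else math.inf
            if t_out <= nxt:      # spike fires before more inputs arrive
                return t_out
    return math.inf               # threshold is never reached

t = output_spike_time([0.0, 0.5], [2.0, 2.0])
```

Because spike times enter only through the differentiable map z = exp(t), gradients flow through this relation almost everywhere, which is what makes standard gradient descent applicable.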
Tracking brain motion during the cardiac cycle using spiral cine-DENSE MRI
Zhong, Xiaodong; Meyer, Craig H.; Schlesinger, David J.; Sheehan, Jason P.; Epstein, Frederick H.; Larner, James M.; Benedict, Stanley H.; Read, Paul W.; Sheng, Ke; Cai, Jing
2009-01-01
Cardiac-synchronized brain motion is well documented, but the accurate measurement of such motion on a pixel-by-pixel basis has been hampered by the lack of a proper imaging technique. In this article, the authors present the implementation of an autotracking spiral cine displacement-encoded stimulated echo (DENSE) magnetic resonance imaging (MRI) technique for the measurement of pulsatile brain motion during the cardiac cycle. Displacement-encoded dynamic MR images of three healthy volunteers were acquired throughout the cardiac cycle using the spiral cine-DENSE pulse sequence gated to the R wave of an electrocardiogram. Pixelwise Lagrangian displacement maps were computed, and 2D displacement as a function of time was determined for selected regions of interest. Different intracranial structures exhibited characteristic motion amplitudes, directions, and patterns throughout the cardiac cycle. Time-resolved displacement curves revealed the pathway of pulsatile motion from the brain stem to the peripheral brain lobes. These preliminary results demonstrated that the spiral cine-DENSE MRI technique can be used to measure cardiac-synchronized pulsatile brain motion on a pixel-by-pixel basis with high temporal/spatial resolution and sensitivity. PMID:19746774
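In displacement encoding, displacement is read directly from the accrued phase. A minimal sketch of this conversion, with an illustrative encoding frequency (not a value from the article):

```python
import numpy as np

def dense_displacement(phase, ke):
    """Convert a DENSE phase map (radians) to displacement.
    Displacement encoding accrues phase proportional to tissue motion:
    phi = 2*pi*ke*d, with ke the encoding frequency in cycles/mm,
    so d = phi / (2*pi*ke) in mm."""
    return phase / (2 * np.pi * ke)

ke = 0.1                                        # cycles/mm, illustrative
true_d = np.array([[0.5, -0.2], [0.0, 1.0]])    # mm, per pixel
phase = 2 * np.pi * ke * true_d                 # simulated phase images
d = dense_displacement(phase, ke)
```

Repeating this per cardiac phase yields the time-resolved displacement curves described above.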
Ultrasonic Evaluation and Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Susan L.; Anderson, Michael T.; Diaz, Aaron A.
2015-10-01
Ultrasonic evaluation of materials for material characterization and flaw detection is as simple as manually moving a single-element probe across a specimen and looking at an oscilloscope display in real time, or as complex as automatically (under computer control) scanning a phased-array probe across a specimen and collecting encoded data for immediate or off-line data analyses. The reliability of the results in the second technique is greatly increased because of a higher density of measurements per scanned area and measurements that can be more precisely related to the specimen geometry. This chapter will briefly discuss applications of the collection of spatially encoded data and focus primarily on the off-line analyses in the form of data imaging. Pacific Northwest National Laboratory (PNNL) has been involved with assessing and advancing the reliability of inservice inspections of nuclear power plant components for over 35 years. Modern ultrasonic imaging techniques such as the synthetic aperture focusing technique (SAFT), phased-array (PA) technology and sound field mapping have undergone considerable improvements to effectively assess and better understand material constraints.
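The delay-and-sum idea behind SAFT can be sketched on a synthetic point scatterer; the geometry, sound speed, and sampling parameters below are illustrative:

```python
import numpy as np

def saft(ascans, probe_x, fs, c, img_x, img_z):
    """Delay-and-sum synthetic aperture focusing (SAFT) sketch.

    ascans:  (n_positions, n_samples) pulse-echo A-scans
    probe_x: transducer x position for each A-scan (mm)
    fs:      sampling rate (samples per microsecond)
    c:       sound speed (mm per microsecond)
    For each image point, sum every A-scan at the sample matching the
    round-trip time from that probe position to the point."""
    image = np.zeros((len(img_z), len(img_x)))
    n = ascans.shape[1]
    for iz, z in enumerate(img_z):
        for ix, x in enumerate(img_x):
            for k, px in enumerate(probe_x):
                r = np.hypot(x - px, z)              # one-way path (mm)
                s = int(np.rint(2 * r / c * fs))     # round-trip sample
                if s < n:
                    image[iz, ix] += ascans[k, s]
    return image

c, fs = 1.5, 50.0                      # mm/us and samples/us (assumed)
probe_x = np.linspace(-5, 5, 11)
ascans = np.zeros((11, 2000))
for k, px in enumerate(probe_x):       # ideal echoes from a point at (0, 10)
    r = np.hypot(0.0 - px, 10.0)
    ascans[k, int(np.rint(2 * r / c * fs))] = 1.0

image = saft(ascans, probe_x, fs, c,
             img_x=np.array([-2.0, 0.0, 2.0]),
             img_z=np.array([8.0, 10.0, 12.0]))
```

All eleven A-scans add coherently only at the true scatterer position, which is the synthetic-aperture focusing effect.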
NASA Astrophysics Data System (ADS)
Afandi, M. I.; Adinanta, H.; Setiono, A.; Qomaruddin; Widiyatmoko, B.
2018-03-01
There are many ways to measure landslide displacement using sensors such as multi-turn potentiometers, fiber optic strain sensors, GPS, geodetic measurement, and ground penetrating radar. The proposed approach uses an optical encoder, which produces a pulse signal with highly stable measurement resolution despite voltage source instability. Landslide measurement using an extensometer based on an optical encoder offers high resolution over a wide measurement range and for a long period of time. An incremental optical encoder provides information about the pulses and the direction of a rotating shaft by producing one quadrature square wave cycle per increment of shaft movement. Measurement results using an optical encoder with 2,000 pulses per revolution have been obtained. The resolution of the extensometer is 36 μm, with a speed limit of about 3.6 cm/s. A system test in a landslide hazard area has been carried out, showing good reliability for monitoring small landslide displacements.
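Decoding the incremental encoder's quadrature A/B channels can be sketched as a minimal software x4 decoder (real systems typically decode in hardware; the sample sequences below are illustrative):

```python
def decode_quadrature(a, b):
    """Count signed increments from incremental-encoder channels A/B.

    a, b: sequences of 0/1 samples.  Each state change of the (A, B)
    pair advances the count by +1 or -1 depending on rotation
    direction (x4 decoding: 4 counts per quadrature cycle)."""
    order = [(0, 0), (0, 1), (1, 1), (1, 0)]   # Gray-code forward order
    pos = 0
    prev = order.index((a[0], b[0]))
    for ai, bi in zip(a[1:], b[1:]):
        cur = order.index((ai, bi))
        step = (cur - prev) % 4
        if step == 1:
            pos += 1          # one state forward
        elif step == 3:
            pos -= 1          # one state backward
        prev = cur
    return pos

# One full forward quadrature cycle (4 counts), then one state backward
a = [0, 0, 1, 1, 0, 1]
b = [0, 1, 1, 0, 0, 0]
pos = decode_quadrature(a, b)
```

With 2,000 pulses per revolution, x4 decoding gives 8,000 counts per revolution; displacement per count then follows from the drum geometry of the extensometer.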
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1999-02-01
Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multiplications by use of cascaded optical correlators. The proposed multiplications perform carry-free generation of the multiplication partial products of two words in constant time. Also, three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings allow one to reduce the SS computation rules involved in optical multiplication. In addition, the proposed joint spatial encodings increase the space bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reduction of the numbers of pixels in the joint spatial encodings for the input TSD operands as well as reduction of the number of pixels used in the proposed matched spatial filters for the optical multipliers.
Cherri, A K
1999-02-10
Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multiplications by use of cascaded optical correlators. The proposed multiplications perform carry-free generation of the multiplication partial products of two words in constant time. Also, three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings allow one to reduce the SS computation rules involved in optical multiplication. In addition, the proposed joint spatial encodings increase the space-bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reduction of the numbers of pixels in the joint spatial encodings for the input TSD operands as well as reduction of the number of pixels used in the proposed matched spatial filters for the optical multipliers.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
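The source-encoding idea behind WISE can be sketched on a toy linear forward model standing in for the wave-equation solves: random ±1 encoding of the sources makes each encoded gradient an unbiased estimate of the full multi-source gradient, so stochastic gradient descent converges at the cost of roughly one simulation per iteration. All sizes and step sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "forward model": one matrix per source.  In USCT these
# would be wave-equation solves; random matrices stand in here.
n_src, n_par, n_rec = 8, 5, 12
A = rng.standard_normal((n_src, n_rec, n_par))
m_true = rng.standard_normal(n_par)          # "sound speed" parameters
data = A @ m_true                            # (n_src, n_rec) noiseless data

def encoded_gradient(m, w):
    """Gradient of 0.5*||A_w m - d_w||^2 for one random +/-1 encoding
    vector w.  Cross terms between sources cancel in expectation, so
    this is an unbiased estimate of the full multi-source gradient."""
    A_w = np.tensordot(w, A, axes=1)         # encoded operator (n_rec, n_par)
    d_w = w @ data                           # encoded data (n_rec,)
    return A_w.T @ (A_w @ m - d_w)

m = np.zeros(n_par)
step = 0.002
for _ in range(3000):                        # stochastic gradient descent
    w = rng.choice([-1.0, 1.0], size=n_src)
    m -= step * encoded_gradient(m, w)
err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```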
Pierce, Benton H; Waring, Jill D; Schacter, Daniel L; Budson, Andrew E
2008-09-01
To examine the effect of using distinctive materials at encoding on recall-to-reject monitoring processes in aging and Alzheimer disease (AD). AD patients, and to a lesser extent older adults, have shown an impaired ability to use recollection-based monitoring processes (eg, recall-to-reject) to avoid various types of false memories, such as source-based false recognition. Younger adults, healthy older adults, and AD patients engaged in an incidental learning task, in which critical category exemplars were either accompanied by a distinctive picture or were presented as words only. Later, participants studied a series of categorized lists in which several typical exemplars were omitted and were then given a source memory test. Both older and younger adults made more accurate source attributions after picture encoding compared with word-only encoding, whereas AD patients did not exhibit this distinctiveness effect. These results extend those of previous studies showing that monitoring in older adults can be enhanced with distinctive encoding, and suggest that such monitoring processes in AD patients may be insensitive to distinctiveness.
Development of an Intelligent Videogrammetric Wind Tunnel Measurement System
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Burner, Alpheus W.
2004-01-01
A videogrammetric technique developed at NASA Langley Research Center has been used at five NASA facilities at the Langley and Ames Research Centers for deformation measurements on a number of sting mounted and semispan models. These include high-speed research and transport models tested over a wide range of aerodynamic conditions including subsonic, transonic, and supersonic regimes. The technique, based on digital photogrammetry, has been used to measure model attitude, deformation, and sting bending. In addition, the technique has been used to study model injection rate effects and to calibrate and validate methods for predicting static aeroelastic deformations of wind tunnel models. An effort is currently underway to develop an intelligent videogrammetric measurement system that will be both useful and usable in large production wind tunnels while providing accurate data in a robust and timely manner. Designed to encode a higher degree of knowledge through computer vision, the system features advanced pattern recognition techniques to improve automated location and identification of targets placed on the wind tunnel model to be used for aerodynamic measurements such as attitude and deformation. This paper will describe the development and strategy of the new intelligent system that was used in a recent test at a large transonic wind tunnel.
NASA Astrophysics Data System (ADS)
Zhu, Yizheng; Li, Chengshuai
2016-03-01
Morphological assessment of spermatozoa is of critical importance for in vitro fertilization (IVF), especially intracytoplasmic sperm injection (ICSI)-based IVF. In ICSI, a single sperm cell is selected and injected into an egg to achieve fertilization. The quality of the sperm cell is found to be highly correlated with IVF success. Sperm characteristics such as shape, head birefringence, and motility, among others, are typically evaluated under a microscope. Current observation relies on conventional techniques such as differential interference contrast microscopy and polarized light microscopy; their qualitative nature, however, limits the ability to provide accurate quantitative analysis. Here, we demonstrate quantitative morphological measurement of sperm cells using two types of spectral interferometric techniques, namely spectral modulation interferometry and spectral multiplexing interferometry. Both are based on spectral-domain low-coherence interferometry, which is known for its exquisite phase determination ability. While spectral modulation interferometry encodes sample phase in a single spectrum, spectral multiplexing interferometry does so for sample birefringence. Therefore they are capable of highly sensitive phase and birefringence imaging. These features are well suited to the imaging of live sperm cells, which are small, dynamic objects with only low to moderate levels of phase and birefringence contrast. We introduce the operation of both techniques and demonstrate their application to measuring the phase and birefringence morphology of sperm cells.
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, constitute a secret key that is shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
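The correlation reconstruction at the core of computational ghost imaging can be sketched as follows. This omits the CS step and uses a tiny binary pattern standing in for the QR-coded image; pattern statistics and measurement counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Object: a tiny binary pattern standing in for the QR-coded image
obj = np.array([[1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 1, 0, 0],
                [0, 0, 1, 1]], dtype=float)

n_meas = 5000
patterns = rng.random((n_meas, 4, 4))          # computational speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))     # single-pixel (bucket) values

# Correlation reconstruction: G = <B * I> - <B><I>, i.e. the covariance
# between the bucket signal and each pattern pixel
G = ((bucket[:, None, None] * patterns).mean(axis=0)
     - bucket.mean() * patterns.mean(axis=0))
```

Pixels belonging to the object correlate with the bucket signal and stand out in G; in the full scheme Bob would instead solve a CS problem to cut the number of measurements.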
The development of an artificial organic networks toolkit for LabVIEW.
Ponce, Hiram; Ponce, Pedro; Molina, Arturo
2015-03-15
Two of the most challenging problems that scientists and researchers face when they want to experiment with new cutting-edge algorithms are the time consumed in encoding them and the difficulty of linking them with other technologies and devices. In that sense, this article introduces the artificial organic networks toolkit for LabVIEW™ (AON-TL) from the implementation point of view. The toolkit is based on the framework provided by the artificial organic networks technique, giving it the potential to add new algorithms based on this technique in the future. Moreover, the toolkit inherits both the rapid prototyping and the easy-to-use characteristics of the LabVIEW™ software (e.g., graphical programming, transparent usage of other software and devices, built-in event-driven programming for user interfaces) to make it simple for the end user. The article describes the global architecture of the toolkit, with particular emphasis on the software implementation of the so-called artificial hydrocarbon networks algorithm. Lastly, the article includes two case studies, one for engineering purposes (i.e., sensor characterization) and one for chemistry applications (i.e., a blood-brain barrier partitioning data model), to show the usage of the toolkit and the potential scalability of the artificial organic networks technique. © 2015 Wiley Periodicals, Inc.
High-quality animation of 2D steady vector fields.
Lefer, Wilfrid; Jobard, Bruno; Leduc, Claire
2004-01-01
Simulators for dynamic systems are now widely used in various application areas and raise the need for effective and accurate flow visualization techniques. Animation allows us to depict direction, orientation, and velocity of a vector field accurately. This paper extends a former proposal for a new approach to produce perfectly cyclic and variable-speed animations for 2D steady vector fields (see [1] and [2]). A complete animation of an arbitrary number of frames is encoded in a single image. The animation can be played using the color table animation technique, which is very effective even on low-end workstations. A cyclic set of textures can be produced as well and then encoded in a common animation format or used for texture mapping on 3D objects. As compared to other approaches, the method presented in this paper produces smoother animations and is more effective, both in memory requirements to store the animation, and in computation time.
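The color table animation technique the paper relies on can be sketched in a few lines: the entire cyclic animation lives in one indexed image, and playback only rotates the lookup table. The pattern and table below are illustrative, not the paper's flow-texture encoding:

```python
# A complete cyclic animation stored in one indexed image: each pixel
# holds a phase index, and "playing" the animation is just rotating
# the color lookup table.
N = 8                                   # frames = lookup-table entries

# Indexed image: pixel value = phase of a moving diagonal pattern
image = [[(x + y) % N for x in range(4)] for y in range(4)]

def frame(image, t):
    """Render frame t by rotating the lookup table instead of
    recomputing pixels.  The 'table' maps index -> intensity, with a
    single bright entry that cycles through the N slots."""
    table = [255 if i == t % N else 0 for i in range(N)]
    return [[table[v] for v in row] for row in image]

f0 = frame(image, 0)
f1 = frame(image, 1)   # the bright diagonal has advanced one step
```

Because the image never changes, each new frame costs only an N-entry table update, which is why the technique runs well even on low-end workstations.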
Optogenetics: a new enlightenment age for zebrafish neurobiology.
Del Bene, Filippo; Wyart, Claire
2012-03-01
Zebrafish became a model of choice for neurobiology because of the transparency of its brain and because of its amenability to genetic manipulation. In particular, at early stages of development the intact larva is an ideal system to apply optical techniques for deep imaging in the nervous system, as well as genetically encoded tools for targeting subsets of neurons and monitoring and manipulating their activity. For these applications, new genetically encoded optical tools, fluorescent sensors, and light-gated channels have been generated, creating the field of "optogenetics." It is now possible to monitor and control neuronal activity with minimal perturbation and unprecedented spatio-temporal resolution. We describe here the main achievements that have occurred in the last decade in imaging and manipulating neuronal activity in intact zebrafish larvae. We provide also examples of functional dissection of neuronal circuits achieved with the applications of these techniques in the visual and locomotor systems.
Integrating mass spectrometry and genomics for cyanobacterial metabolite discovery
Bertin, Matthew J.; Kleigrewe, Karin; Leão, Tiago F.; Gerwick, Lena
2016-01-01
Filamentous marine cyanobacteria produce bioactive natural products with both potential therapeutic value and capacity to be harmful to human health. Genome sequencing has revealed that cyanobacteria have the capacity to produce many more secondary metabolites than have been characterized. The biosynthetic pathways that encode cyanobacterial natural products are mostly uncharacterized, and lack of cyanobacterial genetic tools has largely prevented their heterologous expression. Hence, a combination of cutting edge and traditional techniques has been required to elucidate their secondary metabolite biosynthetic pathways. Here, we review the discovery and refined biochemical understanding of the olefin synthase and fatty acid ACP reductase/aldehyde deformylating oxygenase pathways to hydrocarbons, and the curacin A, jamaicamide A, lyngbyabellin, columbamide, and a trans-acyltransferase macrolactone pathway encoding phormidolide. We integrate into this discussion the use of genomics, mass spectrometric networking, biochemical characterization, and isolation and structure elucidation techniques. PMID:26578313
Implementation of digital image encryption algorithm using logistic function and DNA encoding
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria, Yudi; Fauzi, Muhammad
2018-03-01
Cryptography is a method to secure information that might be in the form of a digital image. Based on past research, in order to increase the security level of chaos-based and DNA-based encryption algorithms, an encryption algorithm using a logistic function and DNA encoding was proposed. The algorithm uses DNA encoding to map the pixel values to DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm serves as the random number generator needed in the DNA complement and XOR operations. The test results show that the PSNR values of the cipher images are 7.98-7.99, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. The cipher image can be decrypted perfectly, and the encryption algorithm has good resistance to entropy attack and statistical attack.
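As a rough illustration of the kind of scheme described above (the specific DNA coding rule, keystream parameters, and per-base operations here are assumptions for the sketch, not the paper's exact tables): pixels are mapped to DNA bases two bits at a time, a logistic-map keystream drives a per-base "DNA addition", and decryption inverts it.

```python
BASES = "ACGT"  # one valid DNA coding rule (assumed: A=00, C=01, G=10, T=11)

def pixel_to_dna(p):
    # 8-bit pixel value -> 4 DNA bases, two bits per base
    return "".join(BASES[(p >> s) & 3] for s in (6, 4, 2, 0))

def dna_to_pixel(d):
    v = 0
    for ch in d:
        v = (v << 2) | BASES.index(ch)
    return v

def logistic_stream(x0, r, n):
    # logistic map x -> r*x*(1-x) as a chaotic keystream of byte values
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def dna_add(d1, d2):
    # per-base "DNA addition": add base indices modulo 4
    return "".join(BASES[(BASES.index(a) + BASES.index(b)) % 4]
                   for a, b in zip(d1, d2))

def dna_sub(d1, d2):
    # inverse of dna_add, used for decryption
    return "".join(BASES[(BASES.index(a) - BASES.index(b)) % 4]
                   for a, b in zip(d1, d2))

def encrypt(pixels, x0=0.3141, r=3.99):
    ks = logistic_stream(x0, r, len(pixels))
    return [dna_to_pixel(dna_add(pixel_to_dna(p), pixel_to_dna(k)))
            for p, k in zip(pixels, ks)]

def decrypt(cipher, x0=0.3141, r=3.99):
    ks = logistic_stream(x0, r, len(cipher))
    return [dna_to_pixel(dna_sub(pixel_to_dna(c), pixel_to_dna(k)))
            for c, k in zip(cipher, ks)]

assert decrypt(encrypt([0, 17, 255])) == [0, 17, 255]
```

The keystream parameters (x0, r) act as the secret key; perfect decryption follows because DNA addition modulo 4 is invertible per base.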
Tan, Wui Siew; Lewis, Christina L; Horelik, Nicholas E; Pregibon, Daniel C; Doyle, Patrick S; Yi, Hyunmin
2008-11-04
We demonstrate hierarchical assembly of tobacco mosaic virus (TMV)-based nanotemplates with hydrogel-based encoded microparticles via nucleic acid hybridization. TMV nanotemplates possess a highly defined structure and a genetically engineered high density thiol functionality. The encoded microparticles are produced in a high throughput microfluidic device via stop-flow lithography (SFL) and consist of spatially discrete regions containing encoded identity information, an internal control, and capture DNAs. For the hybridization-based assembly, partially disassembled TMVs were programmed with linker DNAs that contain sequences complementary to both the virus 5' end and a selected capture DNA. Fluorescence microscopy, atomic force microscopy (AFM), and confocal microscopy results clearly indicate facile assembly of TMV nanotemplates onto microparticles with high spatial and sequence selectivity. We anticipate that our hybridization-based assembly strategy could be employed to create multifunctional viral-synthetic hybrid materials in a rapid and high-throughput manner. Additionally, we believe that these viral-synthetic hybrid microparticles may find broad applications in high capacity, multiplexed target sensing.
Neutral details associated with emotional events are encoded: evidence from a cued recall paradigm.
Mickley Steinmetz, Katherine R; Knight, Aubrey G; Kensinger, Elizabeth A
2016-11-01
Enhanced emotional memory often comes at the cost of memory for surrounding background information. Narrowed-encoding theories suggest that this is due to narrowed attention for emotional information at encoding, leading to impaired encoding of background information. Recent work has suggested that an encoding-based theory may be insufficient. Here, we examined whether cued recall-instead of previously used recognition memory tasks-would reveal evidence that non-emotional information associated with emotional information was effectively encoded. Participants encoded positive, negative, or neutral objects on neutral backgrounds. At retrieval, they were given either the item or the background as a memory cue and were asked to recall the associated scene element. Counter to narrowed-encoding theories, emotional items were more likely than neutral items to trigger recall of the associated background. This finding suggests that there is a memory trace of this contextual information and that emotional cues may facilitate retrieval of this information.
Rajesh, P G; Thomas, Bejoy; Pammi, V S Chandrasekhar; Kesavadas, C; Alexander, Aley; Radhakrishnan, Ashalatha; Thomas, S V; Menon, R N
2018-05-26
To validate the concurrent utility of within-scanner encoding and delayed recognition-memory paradigms to ascertain hippocampal activations during task-based memory fMRI. Memory paradigms were designed for faces, word-pairs and abstract designs. A deep-encoding task comprising a total of 9 cycles was run within a 1.5T MRI scanner. A recall session was performed after 1 h within the scanner using an event-related design. Group analysis was done with 'correct-incorrect' responses applied as parametric modulators in Statistical Parametric Mapping version 8, using a bootstrap method to enable estimation of laterality indices (LI) with custom anatomical masks involving the medio-basal temporal structures. Twenty-seven subjects with drug-resistant mesial temporal lobe epilepsy due to hippocampal sclerosis (MTLE-HS) [17 patients with left-MTLE and 10 patients with right-MTLE] and 21 right-handed age-matched healthy controls (HC) were recruited. For the encoding paradigm, blood oxygen level dependent (BOLD) responses in HC demonstrated right laterality for faces, left laterality for word pairs, and bilaterality for design encoding over the regions of interest. Both right and left MTLE-HS groups revealed left lateralisation for word-pair encoding and bilateral activation for face encoding, with design encoding in right MTLE-HS demonstrating a left shift. As opposed to the lateralization shown in controls, group analysis of cued-recall BOLD signals acquired within the scanner in left MTLE-HS demonstrated right lateralization for word-pairs with bilaterality for faces and designs. The right MTLE-HS group demonstrated bilateral activations for faces, word-pairs and designs. Recall-based fMRI paradigms indicate hippocampal plasticity in MTLE-HS, maximal for word-pair associate recall tasks. Copyright © 2018 Elsevier B.V. All rights reserved.
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where, within the considered error model, we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906
Test of mutually unbiased bases for six-dimensional photonic quantum systems
D'Ambrosio, Vincenzo; Cardano, Filippo; Karimi, Ebrahim; Nagali, Eleonora; Santamato, Enrico; Marrucci, Lorenzo; Sciarrino, Fabio
2013-01-01
In quantum information, complementarity of quantum mechanical observables plays a key role. The eigenstates of two complementary observables form a pair of mutually unbiased bases (MUBs). More generally, a set of MUBs consists of bases that are all pairwise unbiased. Except for specific dimensions of the Hilbert space, the maximal sets of MUBs are unknown in general. Even for a dimension as low as six, the identification of a maximal set of MUBs remains an open problem, although there is strong numerical evidence that no more than three simultaneous MUBs do exist. Here, by exploiting a newly developed holographic technique, we implement and test different sets of three MUBs for a single photon six-dimensional quantum state (a “qusix”), encoded exploiting polarization and orbital angular momentum of photons. A close agreement is observed between theory and experiments. Our results can find applications in state tomography, quantitative wave-particle duality, and quantum key distribution. PMID:24067548
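The unbiasedness property the abstract describes is easy to verify numerically in a small dimension. A sketch for d = 2, where three MUBs (the eigenbases of the Pauli Z, X, and Y operators) are known to exist; every cross-basis overlap probability must equal 1/d:

```python
import numpy as np

d = 2
# three known MUBs in dimension 2: eigenbases of the Pauli Z, X and Y operators
B0 = np.eye(d, dtype=complex)                                   # Z eigenbasis
B1 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # X eigenbasis
B2 = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)  # Y eigenbasis

bases = [B0, B1, B2]
for i, Bi in enumerate(bases):
    for Bj in bases[i + 1:]:
        overlaps = np.abs(Bi.conj().T @ Bj) ** 2
        # mutually unbiased: all cross-basis overlap probabilities are 1/d
        assert np.allclose(overlaps, 1.0 / d)
```

In dimension six the analogous check over candidate triples is exactly the kind of test the experiment implements physically.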
Research on Image Encryption Based on DNA Sequence and Chaos Theory
NASA Astrophysics Data System (ADS)
Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin
2018-04-01
Nowadays encryption is a common technique to protect image data from unauthorized access. In recent years, many scientists have proposed various encryption algorithms based on DNA sequences to provide a new idea for the design of image encryption algorithms. Therefore, a new method of image encryption based on DNA computing technology is proposed in this paper, in which the original image is encrypted by DNA coding and 1-D logistic chaotic mapping. First, the algorithm uses two modules as the encryption key: the first module uses a real DNA sequence, and the second is generated by one-dimensional logistic chaos mapping. Second, the algorithm uses DNA complementary rules to encode the original image, and uses the key and DNA computing technology to compute each pixel value of the original image, so as to realize the encryption of the whole image. Simulation results show that the algorithm has good encryption effect and security.
Pulse Vector-Excitation Speech Encoder
NASA Technical Reports Server (NTRS)
Davidson, Grant; Gersho, Allen
1989-01-01
Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high quality of reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.
Molecular diagnostics of periodontitis.
Korona-Głowniak, Izabela; Siwiec, Radosław; Berger, Marcin; Malm, Anna; Szymańska, Jolanta
2017-01-28
The microorganisms that form dental plaque are the main cause of periodontitis. Their identification and the understanding of the complex relationships and interactions that involve these microorganisms, environmental factors and the host's health status enable improvement in diagnostics and targeted therapy in patients with periodontitis. To this end, molecular diagnostics techniques (both techniques based on the polymerase chain reaction and those involving nucleic acid analysis via hybridization) come increasingly into use. On the basis of a literature review, the following methods are presented: polymerase chain reaction (PCR), real-time polymerase chain reaction (real-time PCR), 16S rRNA-encoding gene sequencing, checkerboard and reverse-capture checkerboard hybridization, microarrays, denaturing gradient gel electrophoresis (DGGE), temperature gradient gel electrophoresis (TGGE), as well as terminal restriction fragment length polymorphism (TRFLP) and next generation sequencing (NGS). The advantages and drawbacks of each method in the examination of periopathogens are indicated. The techniques listed above allow fast detection of even small quantities of pathogen present in diagnostic material and prove particularly useful to detect microorganisms that are difficult or impossible to grow in a laboratory.
Combined Fat Imaging/Look Locker for mapping of lipid spin-lattice (T1) relaxation time
NASA Astrophysics Data System (ADS)
Jihyun Park, Annie; Yung, Andrew; Kozlowski, Piotr; Reinsberg, Stefan
2012-10-01
Tumor hypoxia is a main problem arising in the treatment of cancer due to its resistance to cytotoxic therapy such as radiation and chemotherapy, and its selection for more aggressive tumor phenotypes. Attempts to improve and quantify tumor oxygenation are in development, and tools to assess the success of such schemes are required. Monitoring oxygen level with MRI using a T1-based method (where oxygen acts as a T1-shortening agent) is a dynamic and noninvasive way to study tumor characteristics. The method's sensitivity to oxygen is higher in lipids than in water due to higher oxygen solubility in lipid. Our study aims to develop a time-efficient method to spatially map the T1 of fat inside the tumor. We are combining two techniques: fat/water imaging and Look-Locker (a rapid T1 measurement technique). Fat/water imaging is done with either the Dixon or the Direct Phase Encoding (DPE) method. The combination of these techniques poses new challenges that are tackled using spin dynamics simulations as well as experiments in vitro and in vivo.
Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan
2009-01-01
Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension/s, SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.
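The "random segmentation" difficulty the survey addresses can be seen with a toy prefix-free VLC (this three-symbol code is an illustrative assumption): a single bit error desynchronizes the parsing of all subsequent symbols, which is why soft-input estimators over the whole bitstream are needed rather than symbol-by-symbol hard decoding.

```python
# toy prefix-free variable-length code (an illustrative assumption)
code = {'a': '0', 'b': '10', 'c': '11'}
inv = {v: k for k, v in code.items()}

def encode(msg):
    return ''.join(code[s] for s in msg)

def decode(bits):
    # hard decoding: greedily match codewords left to right
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ''
    return ''.join(out)

bits = encode('abcab')
assert decode(bits) == 'abcab'
# a single bit error re-segments every following symbol:
corrupted = '1' + bits[1:]
assert decode(corrupted) != 'abcab'
```

Because the symbol boundaries themselves depend on the received bits, the error propagates far beyond the flipped bit, unlike with fixed-length codes.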
Quantum annealing correction with minor embedding
NASA Astrophysics Data System (ADS)
Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.
2015-10-01
Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
Bubble masks for time-encoded imaging of fast neutrons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brubaker, Erik; Brennan, James S.; Marleau, Peter
2013-09-01
Time-encoded imaging is an approach to directional radiation detection that is being developed at SNL with a focus on fast neutron directional detection. In this technique, a time modulation of a detected neutron signal is induced, typically by a moving mask that attenuates neutrons with a time structure that depends on the source position. An important challenge in time-encoded imaging is to develop high-resolution two-dimensional imaging capabilities; building a mechanically moving high-resolution mask presents challenges both theoretical and technical. We have investigated an alternative to mechanical masks that replaces the solid mask with a liquid such as mineral oil. Instead of fixed blocks of solid material that move in pre-defined patterns, the oil is contained in tubing structures, and carefully introduced air gaps (bubbles) propagate through the tubing, generating moving patterns of oil mask elements and air apertures. Compared to current moving-mask techniques, the bubble mask is simple, since mechanical motion is replaced by gravity-driven bubble propagation; it is flexible, since arbitrary bubble patterns can be generated by a software-controlled valve actuator; and it is potentially high performance, since the tubing and bubble size can be tuned for high-resolution imaging requirements. We have built and tested various single-tube mask elements, and will present results on bubble introduction and propagation as a function of tubing size and cross-sectional shape; real-time bubble position tracking; neutron source imaging tests; and reconstruction techniques demonstrated on simple test data as well as a simulated full detector system.
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-09-04
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).
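For readers unfamiliar with FBA itself, a minimal sketch of the optimization the FBC package encodes: a stoichiometric matrix, flux bounds, and an objective to maximize. The three-reaction toy network below is an assumption for illustration, and scipy's generic LP solver stands in for a dedicated FBA tool.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (an illustrative assumption): uptake -> A -> B -> biomass.
# Columns: v_uptake, v_convert, v_biomass; rows: metabolites A, B.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10), (0, None), (0, None)]  # flux bounds, as FBC encodes them
c = [0.0, 0.0, -1.0]                      # maximize biomass (linprog minimizes)

# steady state S v = 0 plus bounds, optimized for the objective flux
res = linprog(c, A_eq=S, b_eq=[0.0, 0.0], bounds=bounds)
assert res.success and abs(-res.fun - 10.0) < 1e-6  # limited by the uptake bound
```

The FBC package standardizes exactly these ingredients (stoichiometry, bounds, objective) so that any FBA-capable tool can reproduce the same optimum from an exchanged SBML file.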
Genome network medicine: innovation to overcome huge challenges in cancer therapy.
Roukos, Dimitrios H
2014-01-01
The post-ENCODE era now shapes a new biomedical research direction for understanding the transcriptional and signaling networks driving gene expression and core cellular processes such as cell fate, survival, and apoptosis. Over the past half century, the Francis Crick 'central dogma' of single gene/protein-phenotype (trait/disease) has defined biology, human physiology, disease, diagnostics, and drug discovery. However, the ENCODE project and several other genomic studies using high-throughput sequencing technologies, computational strategies, and imaging techniques to visualize regulatory networks provide evidence that the transcriptional process and gene expression are regulated by highly complex dynamic molecular and signaling networks. This Focus article describes the linear experimentation-based limitations of diagnostics and therapeutics to cure advanced cancer and the need to move on from reductionist to network-based approaches. With wide genomic heterogeneity evident, the power and challenges of next-generation sequencing (NGS) technologies to identify a patient's personal mutational landscape for tailoring the best target drugs in the individual patient are discussed. However, the available drugs are not capable of targeting aberrant signaling networks, and research on functional transcriptional heterogeneity and functional genome organization is poorly understood. Therefore, the future of clinical genome network medicine, aiming at overcoming multiple problems in the new fields of regulatory DNA mapping, noncoding RNA, enhancer RNAs, and the dynamic complexity of transcriptional circuitry, is also discussed, in anticipation of innovative new technology and a strong appreciation of clinical data and evidence-based medicine. The problems and potential solutions in the discovery of next-generation, molecular, and signaling circuitry-based biomarkers and drugs are explored. © 2013 Wiley Periodicals, Inc.
Classification Techniques for Digital Map Compression
1989-03-01
classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
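Run-length coding, one of the two compression methods the report applies after classification, can be sketched as follows (a generic implementation, not the report's):

```python
def rle_encode(data):
    # collapse runs of equal values into (value, run_length) pairs
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# classified map rows contain long runs of one class label, which is
# exactly where run-length coding is most efficient
row = [0] * 6 + [1] * 3 + [0] * 7
assert rle_decode(rle_encode(row)) == row
assert len(rle_encode(row)) == 3
```

Classification helps precisely because it reduces a map to a few discrete labels, lengthening the runs that run-length and Lempel-Ziv coding exploit.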
Source encoding in multi-parameter full waveform inversion
NASA Astrophysics Data System (ADS)
Matharu, Gian; Sacchi, Mauricio D.
2018-04-01
Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
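The core idea of source encoding, summing many sources into one "supershot" with random weights so that cross-talk terms vanish in expectation, can be sketched numerically. Toy random data stands in for modeled wavefields here, and the random ±1 encoding is one common choice rather than the paper's specific scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nt = 8, 256                       # toy: 8 sources, 256 time samples
d_obs = rng.normal(size=(ns, nt))     # "observed" data per source
d_syn = rng.normal(size=(ns, nt))     # "synthetic" data per source
residual = d_syn - d_obs
full = np.sum(residual ** 2)          # sequential-source L2 misfit

def encoded_misfit(w):
    # one supershot: residuals summed with random +/-1 encoding weights
    return np.sum((w @ residual) ** 2)

trials = [encoded_misfit(rng.choice([-1.0, 1.0], size=ns)) for _ in range(5000)]
# cross-talk terms vanish in expectation (E[w_i w_j] = delta_ij), so the
# mean encoded misfit approaches the full sequential misfit
assert abs(np.mean(trials) - full) / full < 0.05
```

Each encoded evaluation costs one simulation instead of ns, which is the computational saving SEFWI trades against the residual cross-talk noise.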
Anisotropic nanomaterials: Synthesis, optical and magnetic properties, and applications
NASA Astrophysics Data System (ADS)
Banholzer, Matthew John
As nanoscience and nanotechnology mature, anisotropic metal nanostructures are emerging in a variety of contexts as a valuable class of nanostructures due to their distinctive attributes. With unique properties ranging from optical to magnetic and beyond, these structures are useful in many new applications. Chapter two discusses the nanodisk code: a linear array of metal disk pairs that serve as surface-enhanced Raman scattering substrates. These multiplexing structures employ a binary encoding scheme, perform better than previous nanowire designs (in the context of SERS), and are useful both for covert encoding and tagging of substrates (based on both spatial disk position and spectroscopic response) and for biomolecule detection (e.g. DNA). Chapter three describes the development of improved, silver-based nanodisk code structures. Work was undertaken to generate structures with high yield and reproducibility and to reoptimize the geometry of each disk pair for maximum Raman enhancement. The improved silver structures exhibit greater enhancement than Au structures (leading to lower DNA detection limits), convey additional flexibility, and enable trinary encoding schemes in which far more unique structures can be created. Chapter four considers the effect of roughness on the plasmonic properties of nanorod structures and introduces a novel method to smooth the end-surfaces of nanorod structures. The smoothing technique is based upon a two-step process relying upon diffusion control during nanowire growth and selective oxidation after each step of synthesis is complete. Empirical and theoretical work show that smoothed nanostructures have superior and controllable optical properties. Chapter five concerns silica-encapsulated gold nanoprisms. This encapsulation allows these highly sensitive prisms to remain stable and protected in solution, enabling their use as class-leading sensors.
Theoretical study complements the empirical work, exploring the effect of encapsulation on the SPR of these structures. Chapter six focuses on the magnetic properties of Au-Ni heterostructures. In addition to demonstration of nanoconfinement effects based upon the anisotropy of the nanorods/nanodisk structure, the magnetic coupling of rod-disk heterostructures is examined. Subsequent investigations suggest that the magnetic behavior of disks can be influenced by nearby rod segments, leading to the creation of a three-state spin system that may prove useful in device applications.
Clemens, Benjamin; Regenbogen, Christina; Koch, Kathrin; Backes, Volker; Romanczuk-Seiferth, Nina; Pauly, Katharina; Shah, N Jon; Schneider, Frank; Habel, Ute; Kellermann, Thilo
2015-01-01
In functional magnetic resonance imaging (fMRI) studies that apply a "subsequent memory" approach, successful encoding is indicated by increased fMRI activity during the encoding phase for hits vs. misses, in areas underlying memory encoding such as the hippocampal formation. Signal-detection theory (SDT) can be used to analyze memory-related fMRI activity as a function of the participant's memory trace strength (d′). The goal of the present study was to use SDT to examine the relationship between fMRI activity during incidental encoding and participants' recognition performance. To implement a new approach, post-experimental group assignment into High or Low Performers (HP or LP) was based on 29 healthy participants' recognition performance, assessed with SDT. The analyses focused on the interaction between the factors group (HP vs. LP) and recognition performance (hits vs. misses). A whole-brain analysis revealed increased activation for HP vs. LP during incidental encoding for remembered vs. forgotten items (hits > misses) in the insula/temporo-parietal junction (TPJ) and the fusiform gyrus (FFG). Parameter estimates in these regions exhibited a significant positive correlation with d′. As these brain regions are highly relevant for salience detection (insula), stimulus-driven attention (TPJ), and content-specific processing of mnemonic stimuli (FFG), we suggest that HPs' elevated memory performance was associated with enhanced attentional and content-specific sensory processing during the encoding phase. We provide the first correlative evidence that encoding-related activity in content-specific sensory areas and content-independent attention and salience detection areas influences memory performance in a task with incidental encoding of facial stimuli.
Based on our findings, we discuss whether the aforementioned group differences in brain activity during incidental encoding might constitute the basis of general differences in memory performance between HP and LP.
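The sensitivity index d′ used in the study can be computed directly from hit and false-alarm counts. A minimal sketch (the log-linear 0.5 correction is an assumption here, one of several common conventions, not necessarily the one the authors used):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """SDT sensitivity index d' = z(hit rate) - z(false-alarm rate).
    Adding 0.5 to each cell (log-linear correction) keeps the z-transform
    finite when a rate would otherwise be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A participant with many hits and few false alarms has a strong memory trace:
print(d_prime(40, 10, 5, 45))
```

Group assignment into HP and LP then amounts to splitting participants on this scalar.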
A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks
NASA Astrophysics Data System (ADS)
Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.
2016-12-01
More and more Earth data analytics software products are published on the Internet as a service, in the format of either a heavyweight WSDL service or a lightweight RESTful API. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mash up) services into value-added ones. Therefore, it is important to have a technique that helps Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, mainly rely on syntax- or semantics-based service matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize the annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks characterized by their leading terms. Afterwards, Bloom filters are generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
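The Bloom filter at the heart of the routing step can be sketched in a few lines. Sizes, the hash construction, and the term examples below are illustrative assumptions, not details from the abstract:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter. A virtual router would hold one of these over
    the leading terms of its semantic local network, so a query is routed
    only toward routers whose filter might contain the query term."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # a plain integer used as a bit array

    def _positions(self, item):
        # Derive k positions by salting one cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False positives are possible; false negatives are not.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

router = BloomFilter()
for term in ["precipitation", "aerosol", "sea-surface-temperature"]:
    router.add(term)
print(router.might_contain("aerosol"), router.might_contain("land-cover"))
```

The space cost is fixed regardless of how many terms a router summarizes, which is what makes the approach scale with the service network.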
Exploiting chromatic aberration to spectrally encode depth in reflectance confocal microscopy
NASA Astrophysics Data System (ADS)
Carrasco-Zevallos, Oscar; Shelton, Ryan L.; Olsovsky, Cory; Saldua, Meagan; Applegate, Brian E.; Maitland, Kristen C.
2011-06-01
We present chromatic confocal microscopy as a technique to axially scan the sample by spectrally encoding depth information to avoid mechanical scanning of the lens or sample. We have achieved an 800 μm focal shift over a range of 680-1080 nm using a hyperchromat lens as the imaging lens. A more complex system that incorporates a water immersion objective to improve axial resolution was built and tested. We determined that increasing objective magnification decreases chromatic shift while improving axial resolution. Furthermore, collimating after the hyperchromat at longer wavelengths yields an increase in focal shift.
Ontology-Based Search of Genomic Metadata.
Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano
2016-01-01
The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, search for relevant datasets for knowledge discovery is only minimally supported: the metadata describing ENCODE datasets are quite simple and incomplete, and are not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, which is all that other available systems support. We empirically show the relevance of the found datasets to the biologists' queries.
Dying to remember, remembering to survive: mortality salience and survival processing.
Burns, Daniel J; Hart, Joshua; Kramer, Melanie E; Burns, Amy D
2014-01-01
Processing items for their relevance to survival improves recall for those items relative to numerous other deep processing encoding techniques. Perhaps relatedly, placing individuals in a mortality salient state has also been shown to enhance retention of items encoded after the mortality salience manipulation (e.g., in a pleasantness rating task), a phenomenon we dubbed the "dying-to-remember" (DTR) effect. The experiments reported here further explored the effect and tested the possibility that the DTR effect is related to survival processing. Experiment 1 replicated the effect using different encoding tasks, demonstrating that the effect is not dependent on the pleasantness task. In Experiment 2, the DTR effect was associated with increases in item-specific processing, not relational processing, according to several indices. Experiment 3 replicated the main results of Experiment 2 and tested the effects of mortality salience and survival processing within the same experiment. The DTR effect and its associated difference in item-specific processing were completely eliminated when the encoding task required survival processing. These results are consistent with the interpretation that the mechanisms responsible for survival processing and DTR effects overlap.
k-t Acceleration in pure phase encode MRI to monitor dynamic flooding processes in rock core plugs
NASA Astrophysics Data System (ADS)
Xiao, Dan; Balcom, Bruce J.
2014-06-01
Monitoring the pore system in sedimentary rocks with MRI when fluids are introduced is very important in the study of petroleum reservoirs and enhanced oil recovery. However, with pure phase encode MRI, the lengthy acquisition time of each image limits the temporal resolution. Spatiotemporal correlations can be exploited to undersample the k-t space data. The stacked frames/profiles can be well approximated by an image matrix with rank deficiency, which can be recovered by nonlinear nuclear norm minimization. Sparsity of the x-t image can also be exploited for nonlinear reconstruction. In this work, the results of a low rank matrix completion technique were compared with k-t sparse compressed sensing. These methods are demonstrated with one-dimensional SPRITE imaging of a Bentheimer rock core plug and SESPI imaging of a Berea rock core plug, but can be easily extended to higher dimensionality and/or other pure phase encode measurements. These ideas will enable higher dimensionality pure phase encode MRI studies of dynamic flooding processes in low magnetic field systems.
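The nuclear-norm recovery step can be illustrated with a toy singular-value-thresholding iteration on a synthetic rank-1 x-t matrix. Matrix sizes, the threshold, and the sampling ratio are assumptions of the sketch, not values from the study:

```python
import numpy as np

def svt_complete(measured, mask, tau=0.5, n_iters=200):
    """Fill in unobserved entries of a low-rank matrix by alternating
    singular-value soft-thresholding (a standard surrogate for nuclear-norm
    minimization) with re-insertion of the sampled entries."""
    x = np.zeros_like(measured)
    for _ in range(n_iters):
        # Shrink singular values to promote low rank ...
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        x = (u * np.maximum(s - tau, 0.0)) @ vt
        # ... then re-enforce consistency with the observed k-t data.
        x[mask] = measured[mask]
    return x

# Synthetic rank-1 "dynamic profile" matrix with 60% of entries observed.
rng = np.random.default_rng(0)
truth = np.outer(rng.standard_normal(32), rng.standard_normal(24))
mask = rng.random(truth.shape) < 0.6
recovered = svt_complete(np.where(mask, truth, 0.0), mask)
print(np.linalg.norm(recovered - truth) / np.linalg.norm(truth))
```

The same undersample-then-complete idea extends from this toy matrix to the stacked frames/profiles of an actual k-t acquisition.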
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_(k-1)), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
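For concreteness, the effective free distance vector can be estimated by brute force for a small encoder. The rate-2/3 tap sets and the search depth below are illustrative assumptions (a bounded-length search yields an upper bound on each d_j, since longer inputs could in principle produce lower-weight codewords):

```python
from itertools import product

def conv_bits(u, g):
    """Binary (GF(2)) convolution of input bits u with generator taps g."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ub in enumerate(u):
        if ub:
            for j, gb in enumerate(g):
                out[i + j] ^= gb
    return out

def effective_free_distances(G, L=6):
    """Upper-bound d = (d_0, ..., d_(k-1)) for a k-input binary convolutional
    encoder by brute force over all length-L input streams.
    G[j][i] holds the taps from input j to output i."""
    k, n_out = len(G), len(G[0])
    length = L + max(len(g) for row in G for g in row) - 1
    best = [None] * k
    for streams in product(product((0, 1), repeat=L), repeat=k):
        if all(not any(s) for s in streams):
            continue  # the all-zero input generates the all-zero codeword
        weight = 0
        for i in range(n_out):
            acc = [0] * length
            for j in range(k):  # superimpose each input's contribution
                for t, b in enumerate(conv_bits(streams[j], G[j][i])):
                    acc[t] ^= b
            weight += sum(acc)
        for j in range(k):
            if any(streams[j]) and (best[j] is None or weight < best[j]):
                best[j] = weight
    return best

# Toy rate-2/3 encoder: tap tuple (1, 1) denotes the polynomial 1 + D.
G = (((1, 1), (1, 0), (0, 1)),
     ((0, 1), (1, 1), (1, 0)))
print(effective_free_distances(G))
```

Re-running with a different realization of the same code would generally change this vector, which is the encoder-dependence the abstract points out.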
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing.
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of a CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the graphics processing unit (GPU). We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation as well as the memory access mechanism are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU, relieving the GPU of an extra and unnecessary computation burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, which makes it possible to leverage the advantages of GPU platforms harmoniously and yields significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
Yu, Jingyin; Tehrim, Sadia; Zhang, Fengqi; Tong, Chaobo; Huang, Junyan; Cheng, Xiaohui; Dong, Caihua; Zhou, Yanqiu; Qin, Rui; Hua, Wei; Liu, Shengyi
2014-01-03
Plant disease resistance (R) genes with the nucleotide binding site (NBS) play an important role in offering resistance to pathogens. The availability of complete genome sequences of Brassica oleracea and Brassica rapa provides an important opportunity for researchers to identify and characterize NBS-encoding R genes in Brassica species and to compare them with analogues in Arabidopsis thaliana based on a comparative genomics approach. However, little is known about the evolutionary fate of NBS-encoding genes in the Brassica lineage after its split from A. thaliana. Here we present a genome-wide analysis of NBS-encoding genes in B. oleracea, B. rapa and A. thaliana. Through HMM search and manual curation, we identified 157, 206 and 167 NBS-encoding genes in the B. oleracea, B. rapa and A. thaliana genomes, respectively. Phylogenetic analysis among the 3 species classified NBS-encoding genes into 6 subgroups. Tandem duplication and whole genome triplication (WGT) analyses revealed that after WGT of the Brassica ancestor, NBS-encoding homologous gene pairs on triplicated regions in the Brassica ancestor were deleted or lost quickly, but NBS-encoding genes in Brassica species experienced species-specific gene amplification by tandem duplication after the divergence of B. rapa and B. oleracea. Expression profiling of NBS-encoding orthologous gene pairs indicated differential expression patterns of the retained orthologous gene copies in B. oleracea and B. rapa. Furthermore, evolutionary analysis of CNL-type NBS-encoding orthologous gene pairs among the 3 species suggested that orthologous genes in B. rapa have undergone stronger negative selection than those in B. oleracea. For the TNL type, however, there are no significant differences between the orthologous gene pairs of the two species. This study is the first identification and characterization of NBS-encoding genes in B. rapa and B. oleracea based on whole genome sequences.
Through tandem duplication and whole genome triplication analysis in B. oleracea, B. rapa and A. thaliana genomes, our study provides insight into the evolutionary history of NBS-encoding genes after divergence of A. thaliana and the Brassica lineage. These results together with expression pattern analysis of NBS-encoding orthologous genes provide useful resource for functional characterization of these genes and genetic improvement of relevant crops.
A rapid and robust gradient measurement technique using dynamic single-point imaging.
Jang, Hyungseok; McMillan, Alan B
2017-09-01
We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of the k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space positions that can be used for image reconstruction. The gradient measurement technique can also be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which side information about the source is exploited at the receiver rather than at the transmitter. Complex processes in video encoding, such as motion vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.
Page layout analysis and classification for complex scanned documents
NASA Astrophysics Data System (ADS)
Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan
2011-09-01
A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and the Run-Length Encoding (RLE) technique is employed. Local and global energy maps in the high-frequency bands of the wavelet domain are generated and used as initial text maps. Further analysis using RLE yields a final text map. The second module detects image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections is employed to identify photo candidate regions. Then, a final photo map is obtained by applying a Markov random field (MRF)-based maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.
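The RLE cue used by the text-detection module can be illustrated on a single binarized row. This is a simplified sketch; the actual module applies further statistics over the resulting runs:

```python
def run_length_encode(row):
    """RLE of one binarized image row as (value, run-length) pairs. Text
    regions tend to produce many short alternating runs, which is the cue
    a text-detection stage can exploit after wavelet analysis."""
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([pixel, 1])   # start a new run
    return [tuple(r) for r in runs]

row = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0]
print(run_length_encode(row))  # → [(0, 2), (1, 3), (0, 1), (1, 2), (0, 3)]
```

A photo region, by contrast, typically binarizes into a few long runs, so run-length statistics separate the two region types cheaply.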
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
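The key property behind source encoding is that a single simulation with randomly weighted, superposed sources yields an unbiased estimate of the full-data gradient. A toy linear forward model makes this concrete; all sizes, operators, and the Rademacher weights are assumptions of the sketch, not the paper's wave solver:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the wave solver: per-source linear operators A_s and data
# d_s; the full-data loss is F(x) = sum_s ||A_s x - d_s||^2.
n_sources, n_params, n_data = 8, 6, 10
A = rng.standard_normal((n_sources, n_data, n_params))
x_true = rng.standard_normal(n_params)
d = np.einsum("sij,j->si", A, x_true)

def full_gradient(x):
    # Conventional waveform inversion: one "simulation" per source.
    return 2 * sum(A[s].T @ (A[s] @ x - d[s]) for s in range(n_sources))

def encoded_gradient(x, rng):
    # WISE-style step: one simulation total, with sources and measurement
    # data combined under a random +/-1 (Rademacher) encoding vector.
    w = rng.choice([-1.0, 1.0], size=n_sources)
    A_enc = np.einsum("s,sij->ij", w, A)
    d_enc = w @ d
    return 2 * A_enc.T @ (A_enc @ x - d_enc)

# Unbiasedness: the average encoded gradient approaches the full gradient.
x0 = np.zeros(n_params)
estimate = np.mean([encoded_gradient(x0, rng) for _ in range(5000)], axis=0)
g = full_gradient(x0)
print(np.linalg.norm(estimate - g) / np.linalg.norm(g))
```

Stochastic gradient descent then uses one cheap encoded gradient per iteration, trading a per-step noise for a per-step cost independent of the number of sources.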
Optical image encryption method based on incoherent imaging and polarized light encoding
NASA Astrophysics Data System (ADS)
Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.
2018-05-01
We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal: the incoherent point-spread function (PSF) of the imaging system serves as the main key, encoding the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarized state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to resist illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording of a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
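The core encoding step, convolution of the input intensity with a random incoherent PSF, can be sketched digitally. The image, the random PSF, and the circular-convolution model below are assumptions of the illustration, and the polarization layer of the scheme is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 64
intensity = np.zeros((n, n))
intensity[20:44, 20:44] = 1.0   # input intensity image (all nonnegative)
psf = rng.random((n, n))
psf /= psf.sum()                # nonnegative, energy-normalized PSF (the key)

# Encryption: incoherent imaging convolves intensities (circular model here),
# so the ciphertext is itself a plain intensity distribution.
encoded = np.real(np.fft.ifft2(np.fft.fft2(intensity) * np.fft.fft2(psf)))

# Decryption with the correct PSF key: inverse filtering in Fourier domain.
decoded = np.real(np.fft.ifft2(np.fft.fft2(encoded) / np.fft.fft2(psf)))
print(np.max(np.abs(decoded - intensity)))  # small numerical error
```

Without the PSF key the deconvolution cannot be performed, and the recorded ciphertext reveals essentially nothing about the input.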
Zhang, Ziheng; Dione, Donald P.; Brown, Peter B.; Shapiro, Erik M.; Sinusas, Albert J.; Sampath, Smita
2011-01-01
A novel MR imaging technique, spatial modulation of magnetization with polarity alternating velocity encoding (SPAMM-PAV), is presented to simultaneously examine the left ventricular early diastolic temporal relationships between myocardial deformation and intra-cavity hemodynamics with a high temporal resolution of 14 ms. This approach is initially evaluated in a dynamic flow and tissue-mimicking phantom. A comparison of regional longitudinal strains and intra-cavity pressure differences (integration of computed in-plane pressure gradients within a selected region) in relation to mitral valve inflow velocities is performed in eight normal volunteers. Our results demonstrate that apical regions have higher strain rates (0.145 ± 0.005 %/ms) during the acceleration period of rapid filling compared to mid-ventricular (0.114 ± 0.007 %/ms) and basal regions (0.088 ± 0.009 %/ms), and apical strain curves plateau at peak mitral inflow velocity. This pattern is reversed during the deceleration period, when the strain rates in the basal regions are the highest (0.027 ± 0.003 %/ms) due to ongoing basal stretching. A positive base-to-apex gradient in peak pressure difference is observed during acceleration, followed by a negative base-to-apex gradient during deceleration. These studies shed light on the regional volumetric and pressure difference changes in the left ventricle during early diastolic filling. PMID:21630348
Pitch-Learning Algorithm For Speech Encoders
NASA Technical Reports Server (NTRS)
Bhaskar, B. R. Udaya
1988-01-01
Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.
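A minimal flavor of such error correction: isolated halving/doubling (octave) errors in a pitch-period track can be detected against neighboring estimates and rescaled. The neighbor-average reference and the threshold below are assumptions of this sketch, not the algorithm's actual rules:

```python
def correct_pitch_track(periods, tol=0.2):
    """Fix isolated halving/doubling errors in a pitch-period track, a
    common failure mode of pitch estimators. Each interior estimate is
    compared against the average of its neighbors; if rescaling by 2 or
    1/2 brings it within tol of that reference, the rescaled value wins."""
    corrected = list(periods)
    for i in range(1, len(corrected) - 1):
        ref = (corrected[i - 1] + corrected[i + 1]) / 2.0
        for factor in (2.0, 0.5):
            if abs(corrected[i] * factor - ref) < tol * ref:
                corrected[i] *= factor
                break
    return corrected

track = [80, 80, 40, 81, 160, 82, 80]   # in samples; 40 and 160 are octave errors
print(correct_pitch_track(track))       # → [80, 80, 80.0, 81, 80.0, 82, 80]
```

In a real coder such a cleanup pass runs alongside the pitch estimator, since a single octave error audibly corrupts the synthesized speech.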
Semi-counterfactual cryptography
NASA Astrophysics Data System (ADS)
Akshata Shenoy, H.; Srikanth, R.; Srinivas, T.
2013-09-01
In counterfactual quantum key distribution (QKD), two remote parties can securely share random polarization-encoded bits through the blocking rather than the transmission of particles. We propose a semi-counterfactual QKD, i.e., one where the secret bit is shared, and also encoded, based on the blocking or non-blocking of a particle. The scheme is thus semi-counterfactual and not based on polarization encoding. As with other counterfactual schemes and the Goldenberg-Vaidman protocol, but unlike BB84, the encoding states are orthogonal and security arises ultimately from single-particle non-locality. Unlike any of them, however, the secret bit generated is maximally indeterminate until the joint action of Alice and Bob. We prove the general security of the protocol, and study the most general photon-number-preserving incoherent attack in detail.
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPPs) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP; its low cost and easy upgrade path attract developers' interest in moving video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods, such as lookup tables, are adopted to reduce the computational complexity. Simulation results showed that these ideas not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
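As an example of the lookup-table idea, per-sample clamping during pixel reconstruction can be replaced by a precomputed table (the value ranges here are illustrative assumptions, not taken from the paper):

```python
# Reconstruction in an MPEG-2 decoder/encoder loop must clamp intermediate
# sums to the valid pixel range [0, 255]. A table precomputed over the
# possible intermediate range replaces per-sample compare-and-branch
# arithmetic with a single indexed read, which is friendlier to the GPP
# pipeline inside tight inner loops.
CLIP = [min(max(v, 0), 255) for v in range(-512, 768)]

def clip_pixel(v):
    return CLIP[v + 512]   # one table lookup instead of two comparisons

assert clip_pixel(-40) == 0 and clip_pixel(300) == 255 and clip_pixel(128) == 128
```

The table costs a little memory but removes branches from the hottest loop, which is exactly the complexity/speed trade the abstract describes.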
Rapid one-step recombinational cloning
Fu, Changlin; Wehr, Daniel R.; Edwards, Janice; Hauge, Brian
2008-01-01
As an increasing number of genes and open reading frames of unknown function are discovered, expression of the encoded proteins is critical toward establishing function. Accordingly, there is an increased need for highly efficient, high-fidelity methods for directional cloning. Among the available methods, site-specific recombination-based cloning techniques, which eliminate the use of restriction endonucleases and ligase, have been widely used for high-throughput (HTP) procedures. We have developed a recombination cloning method, which uses truncated recombination sites to clone PCR products directly into destination/expression vectors, thereby bypassing the requirement for first producing an entry clone. Cloning efficiencies in excess of 80% are obtained providing a highly efficient method for directional HTP cloning. PMID:18424799
High-volume optical vortex multiplexing and de-multiplexing for free-space optical communication.
Wang, Zhongxi; Zhang, N; Yuan, X-C
2011-01-17
We report an approach to increasing the number of signal channels in free-space optical communication based on composed optical vortices (OVs). In the encoding process, the conventional algorithm for generating collinearly superimposed OVs is combined with a genetic algorithm to achieve high-volume OV multiplexing. At the receiver end, a novel Dammann vortex grating is used to analyze multihelix beams carrying a large number of OVs. We experimentally demonstrate a digitized system capable of transmitting and receiving 16 OV channels simultaneously. This system is expected to be compatible with high-speed OV multiplexing techniques, with the potential for extremely high information density in OV communication.
Parallel optoelectronic trinary signed-digit division
NASA Astrophysics Data System (ADS)
Alam, Mohammad S.
1999-03-01
The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of arbitrary-length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
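The TSD digit set {-1, 0, 1} itself is easy to illustrate (it is the balanced ternary representation). The sketch below shows only the encoding and decoding of integers, not the paper's constant-time parallel division:

```python
def to_tsd(n):
    """Encode an integer with trinary signed digits {-1, 0, 1}, least
    significant digit first (balanced ternary). The signed digit set is
    what permits carry-free, constant-time parallel arithmetic."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            r = -1            # borrow: 2 = 3 - 1
        digits.append(r)
        n = (n - r) // 3
    return digits or [0]

def from_tsd(digits):
    """Decode a least-significant-first TSD digit list back to an integer."""
    return sum(d * 3**i for i, d in enumerate(digits))

print(to_tsd(11))  # → [-1, 1, 1], i.e. 11 = -1 + 3 + 9
```

Negative numbers need no separate sign bit: negating a value just negates every digit, which is one reason the representation suits parallel optical encoding.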
JP3D compressed-domain watermarking of volumetric medical data sets
NASA Astrophysics Data System (ADS)
Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian
2010-01-01
Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is the integration of blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to magnetic resonance (MR) and computed tomography (CT) medical images show that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.
Asgarian, Farzad; Sodagar, Amir M
2009-01-01
A novel noncoherent BPSK demodulator is presented for inductively powered biomedical devices. A differential Manchester encoding technique is used, and data demodulation is based on a pulse-width measurement method. In addition to ultra-low power consumption, a high data rate is achieved without increasing the carrier frequency, with an outstanding data-rate-to-carrier-frequency ratio of 100%. The proposed demodulator is especially appropriate for biomedical applications where high-speed data transfer is required, e.g., cochlear implants and visual prostheses. The circuit is designed in a 0.18-μm standard CMOS technology and consumes as little as 232 μW at 1.8 V at a data rate of 10 Mbps.
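Differential Manchester encoding can be sketched at the half-bit level. The "0 = extra transition at the cell boundary" convention below is one common choice and an assumption here, as is the level-based decoding (the paper's circuit measures the resulting pulse widths instead of sampling levels):

```python
def diff_manchester_encode(bits, level=1):
    """Differential Manchester: every bit cell contains a mid-cell
    transition (the embedded clock); a '0' adds a further transition at
    the cell boundary, a '1' does not. Returns the two half-cell levels
    per bit."""
    halves = []
    for b in bits:
        if b == 0:
            level ^= 1           # extra transition at the cell start
        halves.append(level)
        level ^= 1               # mandatory mid-cell (clock) transition
        halves.append(level)
    return halves

def diff_manchester_decode(halves, prev_level=1):
    """Recover bits by checking for a transition at each cell boundary."""
    bits = []
    for i in range(0, len(halves), 2):
        bits.append(1 if halves[i] == prev_level else 0)
        prev_level = halves[i + 1]
    return bits

data = [1, 0, 1, 1, 0, 0]
assert diff_manchester_decode(diff_manchester_encode(data)) == data
```

Because information sits in transitions rather than absolute levels, the scheme is insensitive to polarity inversion of the inductive link, and the guaranteed mid-cell transition is what a pulse-width demodulator exploits.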
Neutral Details Associated with Emotional Events are Encoded: Evidence from a Cued Recall Paradigm
Steinmetz, Katherine R. Mickley; Knight, Aubrey G.; Kensinger, Elizabeth A.
2015-01-01
Enhanced emotional memory often comes at the cost of memory for surrounding background information. Narrowed-encoding theories suggest that this is due to narrowed attention to emotional information at encoding, leading to impaired encoding of background information. Recent work has suggested that an encoding-based theory may be insufficient. Here, we examined whether cued recall, instead of the previously used recognition memory tasks, would reveal evidence that non-emotional information associated with emotional information was effectively encoded. Participants encoded positive, negative, or neutral objects on neutral backgrounds. At retrieval, they were given either the item or the background as a memory cue and were asked to recall the associated scene element. Counter to narrowed-encoding theories, emotional items were more likely than neutral items to trigger recall of the associated background. This finding suggests that there is a memory trace of this contextual information and that emotional cues may facilitate its retrieval. PMID:26220708
Semantics-informed geological maps: Conceptual modeling and knowledge encoding
NASA Astrophysics Data System (ADS)
Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario
2018-07-01
This paper introduces a novel, semantics-informed geologic mapping process, whose application domain is the production of a synthetic geologic map of a large administrative region. A number of approaches concerning the expression of geologic knowledge through UML schemata and ontologies have been around for more than a decade. These approaches have yielded resources that concern specific domains, such as lithology. We develop a conceptual model that aims at building a digital encoding of several domains of geologic knowledge, in order to support the interoperability of the sources. We apply the devised terminological base to the classification of the elements of a geologic map of the Italian Western Alps and northern Apennines (Piemonte region). The digitally encoded knowledge base is a merged set of ontologies, called OntoGeonous. The encoding process identifies the objects of the semantic encoding, the geologic units, gathers the relevant information about such objects from authoritative resources, such as GeoSciML (giving priority to the application schemata reported in the INSPIRE Encoding Cookbook), and expresses the statements by means of axioms encoded in the Web Ontology Language (OWL). To support interoperability, OntoGeonous interlinks the general concepts by referring to the upper level of the SWEET ontology (developed by NASA), and imports knowledge that is already encoded in ontological format (e.g., the Simple Lithology ontology). Machine-readable knowledge allows for consistency checking and for classification of the geological map data through algorithms of automatic reasoning.
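The classification-by-reasoning step that such a machine-readable knowledge base enables can be sketched as computing the transitive closure of subclass axioms. The following minimal Python sketch uses illustrative placeholder class names, not actual OntoGeonous, SWEET, or Simple Lithology terms:

```python
# Subclass axioms as (subclass, superclass) pairs, the plain-data
# analogue of rdfs:subClassOf triples in a merged ontology.
SUBCLASS_OF = {
    ("Calcschist", "MetamorphicRock"),
    ("MetamorphicRock", "Rock"),
    ("Rock", "EarthMaterial"),   # link up to an upper-ontology concept
}

def superclasses(cls, axioms):
    """All (direct and inherited) superclasses of a class: the
    transitive closure a reasoner computes during classification."""
    result, frontier = set(), {cls}
    while frontier:
        nxt = {sup for (sub, sup) in axioms if sub in frontier}
        frontier = nxt - result
        result |= nxt
    return result
```

A map unit asserted to be a `Calcschist` is then automatically classified under every ancestor concept, which is how interlinking with an upper ontology pays off for interoperable queries.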
A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images.
Windrim, Lloyd; Ramakrishnan, Rishi; Melkumyan, Arman; Murphy, Richard J
2018-02-01
This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.
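The spectral angle underlying this family of methods is invariant to uniform brightness scaling, which is the basic reason angle-based encodings tolerate shading; real shadows also change the spectrum of the illumination, which is what the physics-based relighting model addresses. A minimal sketch of the metric itself (pure Python, illustrative band values):

```python
import math

def spectral_angle(a, b):
    """Angle between two spectra; invariant to uniform scaling of
    either spectrum, so a uniformly dimmed pixel maps to angle 0."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp against floating-point overshoot before acos
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

sunlit = [0.40, 0.35, 0.20, 0.55]
shaded = [0.5 * v for v in sunlit]   # same material, half the brightness
```

Here `spectral_angle(sunlit, shaded)` is essentially zero, while spectrally different materials yield large angles; the Relit Spectral Angle-Stacked Autoencoder goes further by learning encodings that remain stable under the non-uniform spectral changes a shadow actually causes.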
Experiments in encoding multilevel images as quadtrees
NASA Technical Reports Server (NTRS)
Lansing, Donald L.
1987-01-01
Image storage requirements for several encoding methods are investigated and the use of quadtrees with multi-gray-level or multicolor images is explored. The results of encoding a variety of images having up to 256 gray levels using three schemes (full raster, run-length and quadtree) are presented. Although there is considerable literature on the use of quadtrees to store and manipulate binary images, their application to multilevel images is relatively undeveloped. The potential advantage of quadtree encoding is that an entire area with a uniform gray level may be encoded as a unit. A pointerless quadtree encoding scheme is described. Data are presented on the size of the quadtree required to encode selected images and on the relative storage requirements of the three encoding schemes. A segmentation scheme based on the statistical variation of gray levels within a quadtree quadrant is described. This parametric scheme may be used to control the storage required by an encoded image and to preprocess a scene for feature identification. Several sets of black and white and pseudocolor images obtained by varying the segmentation parameter are shown.
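The uniform-quadrant idea can be sketched as follows. This nested-list form is a pointer-style illustration (the paper describes a pointerless linear encoding), and the `tol` argument is a simple gray-level range threshold standing in for the paper's statistical-variation segmentation parameter:

```python
def encode_quadtree(img, x=0, y=0, size=None, tol=0):
    """Encode a 2**n x 2**n gray-level image as a nested quadtree.
    A quadrant whose gray levels differ by at most `tol` is stored
    as a single leaf value; otherwise it is split into four."""
    if size is None:
        size = len(img)
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if max(vals) - min(vals) <= tol:
        return sum(vals) // len(vals)              # leaf: one gray level
    h = size // 2
    return [encode_quadtree(img, x,     y,     h, tol),   # NW
            encode_quadtree(img, x + h, y,     h, tol),   # NE
            encode_quadtree(img, x,     y + h, h, tol),   # SW
            encode_quadtree(img, x + h, y + h, h, tol)]   # SE

def count_leaves(tree):
    """Leaf count is a proxy for the storage the encoding needs."""
    return 1 if not isinstance(tree, list) else sum(map(count_leaves, tree))
```

Raising `tol` merges near-uniform quadrants into single leaves, which is exactly the storage-versus-fidelity trade-off the segmentation parameter controls.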
GenInfoGuard--a robust and distortion-free watermarking technique for genetic data.
Iftikhar, Saman; Khan, Sharifullah; Anwar, Zahid; Kamran, Muhammad
2015-01-01
Genetic data, in digital format, is used in different biological phenomena such as DNA translation, mRNA transcription and protein synthesis. The accuracy of these biological phenomena depends on the genetic codes and all subsequent processes. To computerize the biological procedures, different domain experts are provided with authorized access to the genetic codes; as a consequence, the ownership protection of such data is inevitable. For this purpose, watermarks serve as the proof of ownership of data. While protecting data, embedded hidden messages (watermarks) influence the genetic data; therefore, the accurate execution of the relevant processes and the overall result becomes questionable. Most DNA-based watermarking techniques modify the genetic data and are therefore vulnerable to information loss. Distortion-free techniques make sure that no modifications occur during watermarking; however, they are fragile to malicious attacks and therefore cannot be used for ownership protection (particularly, in the presence of a threat model). Therefore, there is a need for a technique that is robust and also prevents unwanted modifications. In this spirit, a watermarking technique with the aforementioned characteristics has been proposed in this paper. The proposed technique makes sure that: (i) the ownership rights are protected by means of a robust watermark; and (ii) the integrity of genetic data is preserved. The proposed technique, GenInfoGuard, ensures its robustness through the "watermark encoding" in permuted values, and exhibits high decoding accuracy against various malicious attacks.
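One way to encode a watermark in permuted values without touching record content is to let the record ordering itself carry the payload, via the factorial number system (Lehmer code). The sketch below is a hypothetical illustration of that general idea, not GenInfoGuard's actual scheme, and unlike GenInfoGuard it is fragile to reordering attacks:

```python
def embed(records, watermark):
    """Reorder records so the permutation itself encodes the integer
    watermark (which must be < n!); no record content is changed,
    so the embedding is distortion-free."""
    items = list(records)
    out = []
    for i in range(len(items), 0, -1):
        watermark, idx = divmod(watermark, i)  # next mixed-radix digit
        out.append(items.pop(idx))
    return out

def extract(ordered, original):
    """Recover the watermark by reading the permutation digits back
    and re-evaluating the mixed-radix number (Horner's scheme)."""
    items = list(original)
    digits = []
    for rec in ordered:
        idx = items.index(rec)
        digits.append((idx, len(items)))       # (digit, radix) at this step
        items.pop(idx)
    watermark = 0
    for idx, base in reversed(digits):
        watermark = idx + base * watermark
    return watermark
```

The design choice worth noting is the trade-off the abstract describes: a pure-permutation embedding preserves integrity perfectly but is easily destroyed by re-sorting, which is why a robust scheme like GenInfoGuard must do more than this sketch.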
Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre
2011-01-01
The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
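The core of such a regression-based identification step is correlating each cell's fluorescence time series with a behavioral regressor (here, eye position) and keeping cells above a threshold. A minimal Python sketch, with an illustrative threshold value and made-up traces rather than the paper's data:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_cells(fluorescence, behavior, threshold=0.5):
    """Flag cells whose fluorescence trace correlates with the
    behavioral regressor (e.g., eye position) above the threshold;
    these become candidates for targeted recording or ablation."""
    return [i for i, trace in enumerate(fluorescence)
            if abs(pearson(trace, behavior)) >= threshold]
```

In practice one would regress against behavior convolved with a calcium-indicator response kernel rather than raw behavior, but the ranking-by-fit logic is the same.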