Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; an iterative algorithm is presented for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.
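The bit-assignment step lends itself to a compact illustration. Under the common high-rate model D_i(b) ≈ var_i·2^(-2b), a greedy marginal-returns allocation coincides with the steepest-descent optimum when the per-coefficient distortion curves are convex (the same convexity condition cited above). The sketch below is a generic stand-in under that assumed model, not the authors' channel-optimized procedure; all names are mine.

```python
import numpy as np

def greedy_bit_allocation(variances, total_bits):
    """Assign bits one at a time to the transform coefficient whose
    quantizer distortion would drop the most (marginal returns).
    Assumes the high-rate model D_i(b) = var_i * 2**(-2b); the paper
    instead uses measured distortions of channel-optimized quantizers."""
    bits = np.zeros(len(variances), dtype=int)
    dist = np.asarray(variances, dtype=float)  # D_i(0) = var_i
    for _ in range(total_bits):
        gain = dist - dist / 4.0               # distortion drop for +1 bit
        i = int(np.argmax(gain))
        bits[i] += 1
        dist[i] /= 4.0                         # one extra bit quarters D_i
    return bits

# e.g. allocate 16 bits over an 8-coefficient block:
# print(greedy_bit_allocation([16, 8, 8, 4, 4, 2, 2, 1], 16))
```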
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
Quantum coding with finite resources.
Tomamichel, Marco; Berta, Mario; Renes, Joseph M
2016-05-09
The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.
NASA Astrophysics Data System (ADS)
Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong
2010-12-01
Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost, high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of normalized least mean squares (NLMS) with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
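A toy version of such an amplitude-indexed LUT predistorter trained with an NLMS-style update might look as follows. The saturating PA model, indexing granularity, step size, and signal scaling are illustrative assumptions, not the hardware design from the paper.

```python
import numpy as np

def soft_pa(x):
    """Hypothetical memoryless PA model with soft AM/AM compression."""
    return x / (1 + 0.25 * np.abs(x) ** 2)

N, MU, EPS, G, A_MAX = 128, 0.5, 1e-6, 1.0, 2.0
lut = np.ones(N, dtype=complex)          # one complex gain per amplitude bin

rng = np.random.default_rng(1)
x = 0.5 * (rng.standard_normal(50_000) + 1j * rng.standard_normal(50_000))

for xn in x:
    idx = min(int(np.abs(xn) / A_MAX * N), N - 1)  # amplitude LUT indexing
    zn = soft_pa(lut[idx] * xn)                    # predistort, then amplify
    err = G * xn - zn                              # deviation from linear gain
    lut[idx] += MU * err * np.conj(xn) / (np.abs(xn) ** 2 + EPS)  # NLMS-style
```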
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from the error propagation that is typical of coding schemes using variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the originals when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
Spectral analysis of variable-length coded digital signals
NASA Astrophysics Data System (ADS)
Cariolaro, G. L.; Pierobon, G. L.; Pupolin, S. G.
1982-05-01
A spectral analysis is conducted for a variable-length word sequence produced by an encoder driven by a stationary memoryless source. A finite-state sequential machine is considered as a model of the line encoder, and the spectral analysis of the encoded message is performed under the assumption that the sourceword sequence is composed of independent identically distributed words. Closed-form expressions for both the continuous and discrete parts of the spectral density are derived in terms of the encoder law and sourceword statistics. The discrete part exhibits spectral lines at integer multiples of 1/(lambda(sub 0)T), where lambda(sub 0) is the greatest common divisor of the possible codeword lengths and T is the symbol period. The derivation of the continuous part can be conveniently factorized, and the theory is applied to the spectral analysis of BnZS and HDBn codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demianowicz, Maciej; Horodecki, Pawel
We analyze different aspects of multiparty communication over quantum memoryless channels and generalize some of the key results known from bipartite channels to the multiparty scenario. In particular, we introduce multiparty versions of subspace and entanglement transmission fidelities. We also provide alternative, local, versions of fidelities and show their equivalence to the global ones in the context of the capacity regions defined. An equivalence of two different capacity notions with respect to the two types of fidelities is proven. In analogy to the bipartite case it is shown, via a sufficiency-of-isometric-encoding theorem, that an additional classical forward side channel does not increase the capacity region of any quantum channel with k senders and m receivers, which represents a compact unit of general quantum network theory. The result proves that the recently provided capacity region of a multiple access channel [M. Horodecki et al., Nature 436, 673 (2005); J. Yard et al., e-print quant-ph/0501045] is optimal also in a scenario with additional support of forward classical communication.
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
Capacity and optimal collusion attack channels for Gaussian fingerprinting games
NASA Astrophysics Data System (ADS)
Wang, Ying; Moulin, Pierre
2007-02-01
In content fingerprinting, the same media covertext (image, video, audio, or text) is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average the fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.
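In symbols (notation mine, following the abstract's description), the optimal collusion strategy is a Gaussian test channel applied to the colluders' average:

```latex
Y \;=\; a\left(\frac{1}{K}\sum_{k=1}^{K} X_k\right) + Z,
\qquad Z \sim \mathcal{N}\!\left(0,\, \sigma_Z^{2} I\right),
```

where the K colluders average their fingerprinted copies X_k, and the attenuation a and noise variance sigma_Z^2 are set to satisfy the expected-distortion constraint.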
On optimal soft-decision demodulation. [in digital communication system
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
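For concreteness, the symmetric cutoff rate being maximized here is R_0 = -log2 sum_j ((1/M) sum_i sqrt(P(j|i)))^2 for a J-output discrete memoryless channel with M equally likely inputs, which is easy to evaluate numerically for any candidate demodulator's channel matrix. A small sketch (function name mine):

```python
import numpy as np

def symmetric_cutoff_rate(P):
    """Symmetric cutoff rate (bits/use) of a DMC with M equally likely
    inputs; P[i, j] = Pr(output j | input i), rows sum to 1."""
    M = P.shape[0]
    inner = (np.sqrt(P).sum(axis=0) / M) ** 2   # ((1/M) sum_i sqrt(P))^2
    return -np.log2(inner.sum())

# Binary symmetric channel with crossover 0.1: R0 ≈ 0.322 bits/use
print(symmetric_cutoff_rate(np.array([[0.9, 0.1], [0.1, 0.9]])))
```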
A note on the R sub 0-parameter for discrete memoryless channels
NASA Technical Reports Server (NTRS)
Mceliece, R. J.
1980-01-01
An explicit class of discrete memoryless channels (q-ary erasure channels) is exhibited. Practical and explicit coded systems of rate R with R/R sub 0 as large as desired can be designed for this class.
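For context (my derivation, not quoted from the report): for the q-ary erasure channel with erasure probability p, the symmetric cutoff rate evaluates to

```latex
R_0 \;=\; -\log_2\!\left(\frac{1-p}{q} + p\right)
\;=\; \log_2 q \;-\; \log_2\!\bigl(1 + (q-1)\,p\bigr),
```

which stays bounded by log2(1/p) as q grows, while the capacity C = (1-p) log2 q grows without bound; this is how coded systems with R/R sub 0 arbitrarily large become possible.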
Direct and reverse secret-key capacities of a quantum channel.
Pirandola, Stefano; García-Patrón, Raul; Braunstein, Samuel L; Lloyd, Seth
2009-02-06
We define the direct and reverse secret-key capacities of a memoryless quantum channel as the optimal rates that entanglement-based quantum-key-distribution protocols can reach by using a single forward classical communication (direct reconciliation) or a single feedback classical communication (reverse reconciliation). In particular, the reverse secret-key capacity can be positive for antidegradable channels, where no forward strategy is known to be secure. This property is explicitly shown in the continuous variable framework by considering arbitrary one-mode Gaussian channels.
On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels
2013-12-01
"Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks", which is to analyze and develop... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system parameters, the... thesis, we studied the performance of Discrete Memoryless Channels (DMC), arising in the context of cooperative underwater wireless sensor networks
An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Watts, Stephen R.; Garg, Sanjay
1995-01-01
This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H(exp infinity) based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
Composite quantum collision models
NASA Astrophysics Data System (ADS)
Lorenzo, Salvatore; Ciccarello, Francesco; Palma, G. Massimo
2017-09-01
A collision model (CM) is a framework to describe open quantum dynamics. In its memoryless version, it models the reservoir R as consisting of a large collection of elementary ancillas: the dynamics of the open system S results from successive collisions of S with the ancillas of R. Here, we present a general formulation of memoryless composite CMs, where S is partitioned into the very open system under study S coupled to one or more auxiliary systems {Si}. Their composite dynamics occurs through internal S-{Si} collisions interspersed with external ones involving {Si} and the reservoir R. We show that important known instances of quantum non-Markovian dynamics of S, such as the emission of an atom into a reservoir featuring a Lorentzian, or multi-Lorentzian, spectral density or a qubit subject to random telegraph noise, can be mapped onto such memoryless composite CMs.
On optimal soft-decision demodulation
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
Wozencraft and Kennedy have suggested that the appropriate demodulator criterion of goodness is the cut-off rate of the discrete memoryless channel created by the modulation system; the criterion of goodness adopted in this note is the symmetric cut-off rate, which differs from the former criterion only in that the signals are assumed equally likely. Massey's necessary condition for optimal demodulation of binary signals is generalized to M-ary signals. It is shown that the optimal demodulator decision regions in likelihood space are bounded by hyperplanes. An iterative method is formulated for finding these optimal decision regions from an initial good guess. For additive white Gaussian noise, the corresponding optimal decision regions in signal space are bounded by hypersurfaces with hyperplane asymptotes; these asymptotes themselves bound the decision regions of a demodulator which, in several examples, is shown to be virtually optimal. In many cases, the necessary condition for demodulator optimality is also sufficient, but a counterexample to its general sufficiency is given.
Memoryless cooperative graph search based on the simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Gang-Feng; Fan, Zhen
2011-04-01
We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. It is shown that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, building on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
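The core of any simulated-annealing-like search is the Metropolis acceptance step; a generic sketch (not the paper's exact update rules) for deciding whether to accept a candidate segment:

```python
import math
import random

def sa_accept(delta_cost, temperature):
    """Metropolis rule: always accept improvements; accept a worse
    candidate segment with probability exp(-delta/T)."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)

# Geometric cooling T_k = T0 * alpha**k is common in practice; the
# probability-1 convergence guarantees of the kind cited in the abstract
# classically require slower (logarithmic) cooling schedules.
```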
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. The analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.
Device independence for two-party cryptography and position verification with memoryless devices
NASA Astrophysics Data System (ADS)
Ribeiro, Jérémy; Thinh, Le Phuc; Kaniewski, Jedrzej; Helsen, Jonas; Wehner, Stephanie
2018-06-01
Quantum communication has demonstrated its usefulness for quantum cryptography far beyond quantum key distribution. One domain is two-party cryptography, whose goal is to allow two parties who may not trust each other to solve joint tasks. Another interesting application is position-based cryptography, whose goal is to use the geographical location of an entity as its only identifying credential. Unfortunately, security of these protocols is not possible against an all-powerful adversary. However, if we impose some realistic physical constraints on the adversary, there exist protocols for which security can be proven, but these so far relied on the knowledge of the quantum operations performed during the protocols. In this work we improve the device-independent security proofs of Kaniewski and Wehner [New J. Phys. 18, 055004 (2016), 10.1088/1367-2630/18/5/055004] for two-party cryptography (with memoryless devices) and we add a security proof for device-independent position verification (also with memoryless devices) under different physical constraints on the adversary. We assess the quality of the devices by observing a Bell violation, and, as for Kaniewski and Wehner, security can be attained for any violation of the Clauser-Horne-Shimony-Holt inequality.
Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus
2014-01-01
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
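One of the simpler inference methods referenced here is order selection via a penalized-likelihood criterion such as AIC. A compact sketch for a fully observed categorical click sequence (function names mine; the paper evaluates a much broader set of methods, including Bayesian and cross-validation approaches):

```python
import numpy as np
from collections import Counter

def markov_log_likelihood(seq, k):
    """Maximized log-likelihood of a k-th order Markov chain on seq."""
    ctx, trans = Counter(), Counter()
    for t in range(k, len(seq)):
        c = tuple(seq[t - k:t])        # length-k context
        ctx[c] += 1
        trans[(c, seq[t])] += 1
    return sum(n * np.log(n / ctx[c]) for (c, _), n in trans.items())

def select_order_aic(seq, n_states, max_k=3):
    """Return the order k minimizing AIC = 2*#params - 2*logL."""
    scores = []
    for k in range(max_k + 1):
        n_params = n_states ** k * (n_states - 1)
        scores.append(2 * n_params - 2 * markov_log_likelihood(seq, k))
    return int(np.argmin(scores))
```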
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
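A minimal worked instance of the idea (my toy example, not from the paper), using the (7,4) Hamming code's parity-check matrix: a sparse binary block is treated as an error pattern and compressed to its 3-bit syndrome; decompression returns the coset leader, so blocks of weight at most 1 are recovered exactly, and rarer heavier blocks contribute the small distortion.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# representation of i+1, so a weight-1 pattern's syndrome directly
# encodes its position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(block):               # 7 source bits -> 3 compressed bits
    return H @ block % 2

def decompress(syndrome):          # minimum-weight pattern for the syndrome
    pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])
    block = np.zeros(7, dtype=int)
    if pos:
        block[pos - 1] = 1
    return block

x = np.array([0, 0, 0, 0, 1, 0, 0])    # sparse source block
assert np.array_equal(decompress(compress(x)), x)
```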
Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Tan, Vincent Y. F.
2015-08-01
We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.
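In symbols, the second-order approximation described here reads (with M*(n, ε) the largest code size achievable over n channel uses at average error ε, and Φ⁻¹ the inverse standard normal CDF):

```latex
\log M^{*}(n,\varepsilon) \;=\; nC \;+\; \sqrt{nV}\,\Phi^{-1}(\varepsilon) \;+\; O(\log n),
```

where C is the Holevo capacity and V the channel dispersion.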
Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm
NASA Astrophysics Data System (ADS)
Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing
2018-03-01
As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis of its hardware and time complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different tap counts, along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with a 0.18-μm SMIC library. The comparisons show that the proposed design has lower area-time complexity and power consumption when the number of filter taps is larger than 48.
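The underlying trick can be shown in a few lines: an inner product sum_i h_i·x_i over quantized, non-negative integer samples equals the first-order moment sum_v v·b_v of "bucketed" coefficients b_v, and that moment is computable by cumulative additions alone. A schematic software analogue of the hardware structure (assumed unsigned inputs; names mine):

```python
def fir_via_first_order_moment(taps, window, n_levels=256):
    """Compute sum(taps[i] * window[i]) without multiplications:
    bucket taps by sample value, then evaluate the first-order
    moment sum_v v*bucket[v] with cumulative additions only."""
    bucket = [0] * n_levels
    for h, x in zip(taps, window):      # x must lie in 0..n_levels-1
        bucket[x] += h
    acc = run = 0
    for v in range(n_levels - 1, 0, -1):
        run += bucket[v]                # run = sum of buckets >= v
        acc += run                      # adds bucket[v] exactly v times
    return acc

assert fir_via_first_order_moment([3, -1, 2], [5, 0, 5]) == 25
```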
2012-05-01
noise (AGN) [1] and [11]. We focus on threshold communication systems; due to the underwater environment, noncoherent communication techniques are... the threshold level. In the context of underwater communications, where noncoherent communication techniques are affected both by noise and
Partial Ordering and Stochastic Resonance in Discrete Memoryless Channels
2012-05-01
Methods for Underwater Wireless Sensor Networks", which is to analyze and develop noncoherent communication methods at the physical layer for target... Capacity Behavior for Simple Models of Optical Fiber Communication," 8th International Conf. on Communications, COMM 2010, Bucharest, pp. 1-6, July 2010
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
Research on Optimization of Encoding Algorithm of PDF417 Barcodes
NASA Astrophysics Data System (ADS)
Sun, Ming; Fu, Longsheng; Han, Shuqing
The purpose of this research is to develop software to optimize the data compression of PDF417 barcodes using VC++ 6.0. According to the different compression modes and the particularities of Chinese, relevant approaches that optimize the encoding algorithm of data compression, such as spillage handling and Chinese character encoding, are proposed, and a simple approach to computing the complex polynomial is introduced. After the whole data compression is finished, the number of codewords is reduced and the encoding algorithm is thus optimized. The developed PDF417 barcode encoding system will be applied in the logistics management of fruits and will therefore also promote the rapid development of two-dimensional bar codes.
Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.
Yu, Zhang; Yang, Xiaomei
2013-01-01
By analyzing the characteristics of the whole-set orders problem and combining it with the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into this algorithm. Furthermore, experimental results prove its feasibility and efficiency.
Modular architecture for robotics and teleoperation
Anderson, Robert J.
1996-12-03
Systems and methods for modularization and discretization of real-time robot, telerobot, and teleoperation systems using passive, network-based control laws. Modules consist of network one-ports and two-ports. Wave variables and position information are passed between modules. The behavior of each module is decomposed into uncoupled linear-time-invariant elements and coupled, nonlinear memoryless elements, which are then separately discretized.
ERIC Educational Resources Information Center
Fazio, Frank; Moser, Gene W.
A probabilistic model (see SE 013 578) describing information processing during the cognitive tasks of recall and problem solving was tested, refined, and developed by testing graduate students on a number of tasks which combined oral, written, and overt "input" and "output" modes in several ways. In a verbal chain one subject…
Landscape Encodings Enhance Optimization
Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.
2012-01-01
Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860
Distillation of secret-key from a class of compound memoryless quantum sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boche, H., E-mail: boche@tum.de; Janßen, G., E-mail: gisbert.janssen@tum.de
We consider secret-key distillation from tripartite compound classical-quantum-quantum (cqq) sources with free forward public communication under the strong security criterion. We design protocols which are universally reliable and secure in this scenario. These are shown to achieve asymptotically optimal rates as long as a certain regularity condition is fulfilled by the set of its generating density matrices. We derive a multi-letter formula which describes the optimal forward secret-key capacity for all compound cqq sources being regular in this sense. We also determine the forward secret-key distillation capacity for situations where the legitimate sending party has perfect knowledge of his/her marginal state deriving from the source statistics. In this case regularity conditions can be dropped. Our results show that the capacities with and without the mentioned kind of state knowledge are equal as long as the source is generated by a regular set of density matrices. We demonstrate that regularity of cqq sources is not only a technical but also an operational issue. For this reason, we give an example of a source which has zero secret-key distillation capacity without sender knowledge, while achieving positive rates is possible if sender marginal knowledge is provided.
Modeling Rare Baseball Events--Are They Memoryless?
ERIC Educational Resources Information Center
Huber, Michael; Glen, Andrew
2007-01-01
Three sets of rare baseball events--pitching a no-hit game, hitting for the cycle, and turning a triple play--offer excellent examples of events whose occurrence may be modeled as Poisson processes. That is, the time of occurrence of one of these events doesn't affect when we see the next occurrence of such. We modeled occurrences of these three…
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
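The analysis pipeline implied here is short enough to sketch: extract threshold exceedances, merge same-storm occurrences, and test the remaining wait times against an exponential law. The declumping window and function names below are my assumptions:

```python
import numpy as np
from scipy import stats

def storm_wait_times(t_days, kp, thresh=5.0, declump_days=2.0):
    """Wait times between storm onsets: keep times with Kp >= thresh,
    then merge exceedances closer than declump_days as one storm."""
    t = np.sort(np.asarray(t_days)[np.asarray(kp) >= thresh])
    onsets = t[np.insert(np.diff(t) > declump_days, 0, True)]
    return np.diff(onsets)

# Under the Poisson hypothesis the waits are exponential:
# waits = storm_wait_times(times, kp_values)
# scale = waits.mean()                              # MLE of 1/rate
# print(stats.kstest(waits, "expon", args=(0, scale)))  # goodness of fit
```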
A new optimized GA-RBF neural network algorithm.
Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan
2014-01-01
When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new hybrid encoding and optimizes both simultaneously: binary encoding encodes the number of hidden-layer neurons, while real encoding encodes the connection weights. The number of hidden-layer neurons and the connection weights are thus optimized simultaneously in the new algorithm. However, the connection weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning and finally obtain a new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
Evaluating true BCI communication rate through mutual information and language models.
Speier, William; Arnold, Corey; Pouratian, Nader
2013-01-01
Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and the assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to making progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in the BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
Unification of quantum information theory
NASA Astrophysics Data System (ADS)
Abeyesinghe, Anura
We present the unification of many previously disparate results in noisy quantum Shannon theory and the unification of all of noiseless quantum Shannon theory. More specifically we deal here with bipartite, unidirectional, and memoryless quantum Shannon theory. We find all the optimal protocols and quantify the relationship between the resources used, both for the one-shot and for the ensemble case, for what is arguably the most fundamental task in quantum information theory: sharing entangled states between a sender and a receiver. We find that all of these protocols are derived from our one-shot superdense coding protocol and relate nicely to each other. We then move on to noisy quantum information theory and give a simple, direct proof of the "mother" protocol, or rather her generalization to the Fully Quantum Slepian-Wolf protocol (FQSW). FQSW simultaneously accomplishes two goals: quantum communication-assisted entanglement distillation, and state transfer from the sender to the receiver. As a result, in addition to her other "children," the mother protocol generates the state merging primitive of Horodecki, Oppenheim, and Winter as well as a new class of distributed compression protocols for correlated quantum sources, which are optimal for sources described by separable density operators. Moreover, the mother protocol described here is easily transformed into the so-called "father" protocol, demonstrating that the division of single-sender/single-receiver protocols into two families was unnecessary: all protocols in the family are children of the mother.
Non-Gaussian and Multivariate Noise Models for Signal Detection.
1982-09-01
follow, some of the basic results of asymptotic theory are presented, both to make the notation clear and to give some background for the... densities are considered within a detection framework. The discussions include specific examples and also some general methods of density generation... densities generated by a memoryless, nonlinear transformation of a correlated, Gaussian source is discussed in some detail. A member of this class has the
Leroch, Michaela; Mernke, Dennis; Koppenhoefer, Dieter; Schneider, Prisca; Mosbach, Andreas; Doehlemann, Gunther; Hahn, Matthias
2011-05-01
The green fluorescent protein (GFP) and its variants have been widely used in modern biology as reporters that allow a variety of live-cell imaging techniques. So far, GFP has rarely been used in the gray mold fungus Botrytis cinerea because of low fluorescence intensity. The codon usage of B. cinerea genes strongly deviates from that of commonly used GFP-encoding genes and reveals a lower GC content than that of other fungi. In this study, we report the development and use of a codon-optimized version of the enhanced GFP (eGFP)-encoding gene (Bcgfp) for improved expression in B. cinerea. Both the codon optimization and, to a smaller extent, the insertion of an intron resulted in higher mRNA levels and increased fluorescence. Bcgfp was used for localization of nuclei in germinating spores and for visualizing host penetration. We further demonstrate the use of promoter-Bcgfp fusions for quantitative evaluation of various toxic compounds as inducers of the atrB gene, which encodes an ABC-type drug efflux transporter of B. cinerea. In addition, a codon-optimized mCherry-encoding gene was constructed, which yielded bright red fluorescence in B. cinerea.
Memory-assisted quantum key distribution resilient against multiple-excitation effects
NASA Astrophysics Data System (ADS)
Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen
2018-01-01
Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly-achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.
Solving traveling salesman problems with DNA molecules encoding numerical values.
Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak
2004-12-01
We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.
Namboodiri, Vijay Mohan K; Levy, Joshua M; Mihalas, Stefan; Sims, David W; Hussain Shuler, Marshall G
2016-08-02
Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that "Lévy random walks", which can produce power law path length distributions, are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent's goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers.
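The link between the hyperbolic and power-law descriptions is simple (notation mine): a hyperbolic, Lomax-type path length density

```latex
p(l) \;\propto\; \left(1 + \frac{l}{\beta}\right)^{-\alpha}
```

is roughly flat for l much smaller than β but decays as l^(-α) for l much larger than β, so finite samples from it are well approximated by power laws, as the abstract notes.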
Gautestad, Arild O; Mysterud, Atle
2013-01-01
The Lévy flight foraging hypothesis predicts a transition from scale-free Lévy walk (LW) to scale-specific Brownian motion (BM) as an animal moves from resource-poor towards resource-rich environment. However, the LW-BM continuum implies a premise of memory-less search, which contradicts the cognitive capacity of vertebrates. We describe methods to test if apparent support for LW-BM transitions may rather be a statistical artifact from movement under varying intensity of site fidelity. A higher frequency of returns to previously visited patches (stronger site fidelity) may erroneously be interpreted as a switch from LW towards BM. Simulations of scale-free, memory-enhanced space use illustrate how the ratio between return events and scale-free exploratory movement translates to varying strength of site fidelity. An expanded analysis of GPS data of 18 female red deer, Cervus elaphus, strengthens previous empirical support of memory-enhanced and scale-free space use in a northern forest ecosystem. A statistical mechanical model architecture that describes foraging under environment-dependent variation of site fidelity may allow for higher realism of optimal search models and movement ecology in general, in particular for vertebrates with high cognitive capacity.
Multicore-based 3D-DWT video encoder
NASA Astrophysics Data System (ADS)
Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector
2013-12-01
Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
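A minimal real-number-encoded GA of the general kind described, with tournament selection, blend crossover, and Gaussian mutation (a generic sketch on a toy objective, not the paper's implementation; all names mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def real_ga(f, bounds, pop=40, gens=100, sigma=0.05):
    """Minimize f over a box. bounds is an (n, 2) array of [lo, hi]."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (pop, len(lo)))
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        def tournament():                      # binary tournament selection
            i, j = rng.integers(pop, size=2)
            return X[i] if fit[i] < fit[j] else X[j]
        children = []
        for _ in range(pop):
            a, b = tournament(), tournament()
            w = rng.uniform(-0.5, 1.5, len(lo))           # BLX-style blend
            child = w * a + (1.0 - w) * b
            child += sigma * (hi - lo) * rng.standard_normal(len(lo))
            children.append(np.clip(child, lo, hi))
        X = np.array(children)
    return min(X, key=f)

# Toy "hill" stand-in for the flow-solver objectives in the paper:
best = real_ga(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
               np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```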
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nouidui, Thierry; Wetter, Michael
SimulatorToFMU is a software package written in Python which allows users to export a memoryless Python-driven simulation program or script as a Functional Mock-up Unit (FMU) for model exchange or co-simulation. In CyDER (Cyber Physical Co-simulation Platform for Distributed Energy Resources in Smart Grids), SimulatorToFMU will allow exporting OPAL-RT as an FMU. This will enable OPAL-RT to be linked to CYMDIST and GridDyn FMUs through a standardized open-source interface.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
Information fusion based techniques for HEVC
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; Meyer-Baese, Uwe; Meyer-Baese, Anke; Grecos, Christos
2017-05-01
Addressing the conflicting requirements of a multi-parameter H.265/HEVC encoder system, this paper analyzes a set of optimizations for improving the trade-off between quality, performance, and power consumption for different reliable and accurate applications. The method is based on Pareto optimization and has been tested at different resolutions on real-time encoders.
Propeller performance analysis and multidisciplinary optimization using a genetic algorithm
NASA Astrophysics Data System (ADS)
Burger, Christoph
A propeller performance analysis program has been developed and integrated into a genetic algorithm for design optimization. The design tool will produce optimal propeller geometries for a given goal, which includes performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis, and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds-number-based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep, and the local airfoil sections produced blades with favorable tradeoffs between single- and multiple-point optimizations of propeller performance and acoustic noise signatures. Optimizations using a binary-encoded IMPROVE(c) genetic algorithm (GA) and a real-encoded GA were obtained after optimization runs with some premature convergence. The newly developed real-encoded GA was used to obtain the majority of the results; it produced generally better convergence characteristics than the binary-encoded GA. The optimization trade-offs show that single-point optimized propellers have favorable performance, but circulation distributions were less smooth when compared to dual-point or multiobjective optimizations. Some of the single-point optimizations generated propellers with proplets, which show a loading shift to the blade tip region. When noise is included in the objective functions, some propellers exhibit a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition, the blade count was increased in some optimizations to reduce the acoustic blade signature.
Performance evaluation of matrix gradient coils.
Jia, Feng; Schultz, Gerrit; Testud, Frederik; Welz, Anna Masako; Weber, Hans; Littin, Sebastian; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim
2016-02-01
In this paper, we present a new performance measure of a matrix coil (also known as a multi-coil) from the perspective of efficient, local, non-linear encoding without explicitly considering target encoding fields. An optimization problem based on a joint optimization for the non-linear encoding fields is formulated. Based on the derived objective function, a figure of merit of a matrix coil is defined, which is a generalization of a previously known resistive figure of merit for traditional gradient coils. A cylindrical matrix coil design with a high number of elements is used to illustrate the proposed performance measure. The results are analyzed to reveal novel features of matrix coil designs, which allowed us to optimize coil parameters, such as the number of coil elements. A comparison to a scaled, existing multi-coil is also provided to demonstrate the use of the proposed performance parameter. The assessment of a matrix gradient coil profits from using a single performance parameter that takes the local encoding performance of the coil into account in relation to the dissipated power.
Joint-layer encoder optimization for HEVC scalable extensions
NASA Astrophysics Data System (ADS)
Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong
2014-09-01
Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, based on the HEVC standard. To improve enhancement-layer coding efficiency, inter-layer prediction, including texture and motion information generated from the base layer, is used for enhancement-layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate resources more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
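One illustrative form such a viewing-probability-weighted cost could take; this is a sketch under assumed weighting, and the actual SHVC cost function, lambda, and probability model are not specified here.

```python
# Hypothetical joint-layer RD cost: distortion terms are weighted by the
# probability that each layer is the one actually viewed, given a packet
# loss rate; all numbers below are placeholders.
def joint_layer_rd_cost(d_bl, r_bl, d_el, r_el, p_loss, lam):
    p_el = 1.0 - p_loss        # enhancement layer viewed if its packets arrive
    p_bl = p_loss              # otherwise the base layer is viewed
    distortion = p_bl * d_bl + p_el * d_el
    rate = r_bl + r_el         # total bits are kept constant across QP trials
    return distortion + lam * rate

print(joint_layer_rd_cost(d_bl=40.0, r_bl=1000, d_el=12.0, r_el=3000,
                          p_loss=0.1, lam=0.005))
```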
Optimization of topological quantum algorithms using Lattice Surgery is hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon
The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic and achieves universality without using defects for encoding information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical-qubit requirement, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.
Encoder-Decoder Optimization for Brain-Computer Interfaces
Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam
2015-01-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on the routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, due to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
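A minimal sketch of the hybrid-encoding idea, assuming a 16-bit binary mapping of each real parameter: multi-point crossover operates on the binary genes while mutation perturbs the decimal values. Ranges, rates, and bit width are illustrative, not the paper's settings.

```python
import numpy as np
rng = np.random.default_rng(2)

BITS = 16
def to_bits(x, lo, hi):
    g = int(round((x - lo) / (hi - lo) * (2**BITS - 1)))
    return [(g >> i) & 1 for i in range(BITS)]

def from_bits(b, lo, hi):
    g = sum(bit << i for i, bit in enumerate(b))
    return lo + g / (2**BITS - 1) * (hi - lo)

def crossover_binary(x, y, lo, hi, n_points=2):
    bx, by = to_bits(x, lo, hi), to_bits(y, lo, hi)
    for cut in sorted(rng.integers(1, BITS, size=n_points)):
        bx[cut:], by[cut:] = by[cut:], bx[cut:]   # swap tails at each cut
    return from_bits(bx, lo, hi), from_bits(by, lo, hi)

def mutate_decimal(x, lo, hi, sigma=0.05):
    return float(np.clip(x + rng.normal(0, sigma * (hi - lo)), lo, hi))

a, b = crossover_binary(1.2, 3.4, lo=0.0, hi=5.0)
print(mutate_decimal(a, 0.0, 5.0), mutate_decimal(b, 0.0, 5.0))
```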
Leveraging Environmental Correlations: The Thermodynamics of Requisite Variety
NASA Astrophysics Data System (ADS)
Boyd, Alexander B.; Mandal, Dibyendu; Crutchfield, James P.
2017-06-01
Key to biological success, the requisite variety that confronts an adaptive organism is the set of detectable, accessible, and controllable states in its environment. We analyze its role in the thermodynamic functioning of information ratchets—a form of autonomous Maxwellian Demon capable of exploiting fluctuations in an external information reservoir to harvest useful work from a thermal bath. This establishes a quantitative paradigm for understanding how adaptive agents leverage structured thermal environments for their own thermodynamic benefit. General ratchets behave as memoryful communication channels, interacting with their environment sequentially and storing results to an output. The bulk of thermal ratchets analyzed to date, however, assume memoryless environments that generate input signals without temporal correlations. Employing computational mechanics and a new information-processing Second Law of Thermodynamics (IPSL) we remove these restrictions, analyzing general finite-state ratchets interacting with structured environments that generate correlated input signals. On the one hand, we demonstrate that a ratchet need not have memory to exploit an uncorrelated environment. On the other, and more appropriate to biological adaptation, we show that a ratchet must have memory to most effectively leverage structure and correlation in its environment. The lesson is that to optimally harvest work a ratchet's memory must reflect the input generator's memory. Finally, we investigate achieving the IPSL bounds on the amount of work a ratchet can extract from its environment, discovering that finite-state, optimal ratchets are unable to reach these bounds. In contrast, we show that infinite-state ratchets can go well beyond these bounds by utilizing their own infinite "negentropy". We conclude with an outline of the collective thermodynamics of information-ratchet swarms.
Namboodiri, Vijay Mohan K.; Levy, Joshua M.; Mihalas, Stefan; Sims, David W.; Hussain Shuler, Marshall G.
2016-01-01
Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that “Lévy random walks”—which can produce power law path length distributions—are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent’s goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers. PMID:27385831
Maggi, Claudio; Paoluzzi, Matteo; Angelani, Luca; Di Leonardo, Roberto
2017-12-14
We investigate experimentally and numerically the stochastic dynamics and the time-dependent response of colloids subject to a small external perturbation in a dense bath of motile E. coli bacteria. The external field is a magnetic field acting on a superparamagnetic microbead suspended in an active medium. The measured linear response reveals an instantaneous friction kernel despite the complexity of the bacterial bath. By comparing the mean squared displacement and the response function we detect a clear violation of the fluctuation dissipation theorem.
General mechanism for the 1 /f noise
NASA Astrophysics Data System (ADS)
Yadav, Avinash Chand; Ramaswamy, Ramakrishna; Dhar, Deepak
2017-08-01
We consider the response of a memoryless nonlinear device that acts instantaneously, converting an input signal ξ(t) into an output η(t) at the same time t. For input Gaussian noise with power spectrum 1/f^α, the nonlinearity can modify the spectral index of the output to give a spectrum that varies as 1/f^α′ with α′ ≠ α. We show that the value of α′ depends on the nonlinear transformation and can be tuned continuously. This provides a general mechanism for the ubiquitous 1/f noise found in nature.
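A numerical check of this mechanism; this is a sketch in which the squaring nonlinearity and the periodogram-slope estimate are illustrative choices. Gaussian 1/f^α noise is synthesized by spectral shaping, passed through a memoryless nonlinearity, and the output index α′ is re-estimated.

```python
import numpy as np

def gaussian_one_over_f(n, alpha, rng):
    f = np.fft.rfftfreq(n, d=1.0)
    f[0] = f[1]                                   # avoid division by zero at DC
    spec = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    spec *= f ** (-alpha / 2)                     # power spectrum ~ 1/f^alpha
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def spectral_index(x):
    f = np.fft.rfftfreq(x.size, d=1.0)[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return -slope                                 # power ~ 1/f^alpha'

rng = np.random.default_rng(3)
xi = gaussian_one_over_f(2**16, alpha=1.0, rng=rng)
eta = xi**2                                       # memoryless nonlinearity
print(spectral_index(xi), spectral_index(eta))    # alpha' differs from alpha
```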
Feedback-tuned, noise resilient gates for encoded spin qubits
NASA Astrophysics Data System (ADS)
Bluhm, Hendrik
Spin-1/2 particles form native two-level systems and thus lend themselves to a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single-qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange-based two-qubit gates.
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g., V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded spiking neural network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encodes 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when more biophysically realistic parameters are used in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
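A minimal sketch of the nonspiking (analog) LCA that the Spiking LCA is built to match, assuming standard soft-threshold dynamics; the dictionary, threshold, and step sizes are illustrative.

```python
import numpy as np

def lca(s, Phi, lam=0.1, tau=0.01, dt=1e-3, steps=2000):
    G = Phi.T @ Phi - np.eye(Phi.shape[1])        # lateral inhibition
    b = Phi.T @ s                                 # feedforward drive
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (dt / tau) * (b - u - G @ a)         # leaky integrator dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(4)
Phi = rng.normal(size=(64, 128))
Phi /= np.linalg.norm(Phi, axis=0)                # unit-norm dictionary
s = Phi[:, 5] - 0.7 * Phi[:, 42]                  # 2-sparse stimulus
a = lca(s, Phi)
print(np.count_nonzero(a), a[5], a[42])           # few active "neurons"
```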
Apolipoprotein A-I mutant proteins having cysteine substitutions and polynucleotides encoding same
Oda, Michael N [Benicia, CA; Forte, Trudy M [Berkeley, CA
2007-05-29
Functional apolipoprotein A-I mutant proteins having one or more cysteine substitutions, and polynucleotides encoding the same, can be used to modulate paraoxonase's arylesterase activity. These ApoA-I mutant proteins can be used as therapeutic agents to combat cardiovascular disease, atherosclerosis, acute phase response, and other inflammation-related diseases. The invention also includes modifications and optimizations of the ApoA-I nucleotide sequence for the purpose of increasing protein expression.
Guo, Y C; Wang, H; Wu, H P; Zhang, M Q
2015-12-21
To address the defects of large mean square error (MSE) and slow convergence speed of the constant modulus algorithm (CMA) when equalizing multi-modulus signals, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA-encoded sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
Parallel reduced-instruction-set-computer architecture for real-time symbolic pattern matching
NASA Astrophysics Data System (ADS)
Parson, Dale E.
1991-03-01
This report discusses ongoing work on a parallel reduced-instruction-set-computer (RISC) architecture for automatic production matching. The PRIOPS compiler takes advantage of the memoryless character of automatic processing by translating a program's collection of automatic production tests into an equivalent combinational circuit: a digital circuit without memory whose outputs are immediate functions of its inputs. The circuit provides a highly parallel, fine-grain model of automatic matching. The compiler then maps the combinational circuit onto RISC hardware. The heart of the processor is an array of comparators capable of testing production conditions in parallel. Each comparator attaches to private memory that contains virtual circuit nodes: records of the current state of nodes and busses in the combinational circuit. All comparator memories hold identical information, allowing simultaneous update for a single changing circuit node and simultaneous retrieval of different circuit nodes by different comparators. Along with the comparator-based logic unit is a sequencer that determines the current combination of production-derived comparisons to try, based on the combined success and failure of previous combinations of comparisons. The memoryless nature of automatic matching allows the compiler to designate invariant memory addresses for virtual circuit nodes, and to generate the most effective sequences of comparison test combinations. The result is maximal utilization of parallel hardware, indicating speed increases and scalability beyond those found for coarse-grain, multiprocessor approaches to concurrent Rete matching. Future work will consider application of this RISC architecture to the standard (controlled) Rete algorithm, where search through memory dominates portions of matching.
Optimal attacks on qubit-based Quantum Key Recycling
NASA Astrophysics Data System (ADS)
Leermakers, Daan; Škorić, Boris
2018-03-01
Quantum Key Recycling (QKR) is a quantum cryptographic primitive that allows one to reuse keys in an unconditionally secure way. By removing the need to repeatedly generate new keys, it improves communication efficiency. Škorić and de Vries recently proposed a QKR scheme based on 8-state encoding (four bases). It does not require quantum computers for encryption/decryption but only single-qubit operations. We provide a missing ingredient in the security analysis of this scheme in the case of noisy channels: accurate upper bounds on the required amount of privacy amplification. We determine optimal attacks against the message and against the key, for 8-state encoding as well as 4-state and 6-state conjugate coding. We provide results in terms of min-entropy loss as well as accessible (Shannon) information. We show that the Shannon entropy analysis for 8-state encoding reduces to the analysis of quantum key distribution, whereas 4-state and 6-state suffer from additional leaks that make them less effective. From the optimal attacks we compute the required amount of privacy amplification and hence the achievable communication rate (useful information per qubit) of qubit-based QKR. Overall, 8-state encoding yields the highest communication rates.
Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm.
Cooley, Clarissa Zimmerman; Haskell, Melissa W; Cauley, Stephen F; Sappo, Charlotte; Lapierre, Cristen D; Ha, Christopher G; Stockmann, Jason P; Wald, Lawrence L
2018-01-01
Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized cylindrical Halbach magnet using the genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated sparse Halbach design, and evaluate optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order of magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design the resolution uniformity is improved by 95% compared to a uniformly populated design.
Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B
2018-02-01
To design and evaluate eddy-current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy-current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy-current-compensated. The EN-CODE DTI waveform was compared with the existing eddy-current-nulled twice-refocused spin echo (TRSE) sequence as well as monopolar (MONO) and non-eddy-current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuro imaging in 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy-current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10^-5) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy-current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med 79:663-672, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
NASA Technical Reports Server (NTRS)
Haddad, Wassim M.; Bernstein, Dennis S.
1991-01-01
Lyapunov function proofs of sufficient conditions for asymptotic stability are given for feedback interconnections of bounded real and positive real transfer functions. Two cases are considered: (1) a proper bounded real (resp., positive real) transfer function with a bounded real (resp., positive real) time-varying memoryless nonlinearity; and (2) two strictly proper bounded real (resp., positive real) transfer functions. A similar treatment is given for the circle and Popov theorems. Application of these results to robust stability with time-varying bounded real, positive real, and sector-bounded uncertainty is discussed.
Hussain, Shahid M; De Becker, Jan; Hop, Wim C J; Dwarkasing, Soendersing; Wielopolski, Piotr A
2005-03-01
To optimize and assess the feasibility of a single-shot black-blood T2-weighted spin-echo echo-planar imaging (SSBB-EPI) sequence for MRI of the liver using sensitivity encoding (SENSE), and to compare the results with those obtained with a T2-weighted turbo spin-echo (TSE) sequence. Six volunteers and 16 patients were scanned at 1.5T (Philips Intera). In the volunteer study, we optimized the SSBB-EPI sequence by interactively changing the parameters (i.e., the resolution, echo time (TE), diffusion weighting with low b-values, and polarity of the phase-encoding gradient) with regard to distortion, suppression of the blood signal, and sensitivity to motion. The influence of each change was assessed. The optimized SSBB-EPI sequence was applied in patients (N = 16). A number of items, including the overall image quality (on a scale of 1-5), were used for graded evaluation. In addition, the signal-to-noise ratio (SNR) of the liver was calculated. Statistical analysis was carried out with the use of Wilcoxon's signed rank test for comparison of the SSBB-EPI and TSE sequences, with P = 0.05 considered the limit for significance. The SSBB-EPI sequence was improved by the following steps: 1) fewer frequency points than phase-encoding steps, 2) a b-factor of 20, and 3) a reversed polarity of the phase-encoding gradient. In patients, the mean overall image quality score for the optimized SSBB-EPI (3.5 (range: 1-4)) and TSE (3.6 (range: 3-4)), and the SNR of the liver on SSBB-EPI (mean ± SD = 7.6 ± 4.0) and TSE (8.9 ± 4.6), were not significantly different (P > 0.05). Optimized SSBB-EPI with SENSE proved to be feasible in patients, and the overall image quality and SNR of the liver were comparable to those achieved with the standard respiratory-triggered T2-weighted TSE sequence. © 2005 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
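A minimal sketch of the threshold-sweep procedure on synthetic data; the noise model, spike amplitudes, and rates are invented for illustration. A binary stimulus modulates negative-going spikes in a noisy voltage trace, and the plug-in mutual information between the stimulus and the threshold-crossing count is evaluated at several candidate thresholds.

```python
import numpy as np
rng = np.random.default_rng(5)

def crossings(v, thresh):
    # count downward threshold crossings
    return int(np.sum((v[:-1] > thresh) & (v[1:] <= thresh)))

def mutual_info(x, y):
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def trial(stim):
    v = rng.normal(0, 1, 500)                  # background voltage noise
    n_spk = rng.poisson(5 + 10 * stim)         # stimulus-modulated spike rate
    for t in rng.integers(0, 500, n_spk):
        v[t] -= rng.normal(6 - 2 * stim, 1)    # negative-going spikes
    return v

stims = rng.integers(0, 2, 400)
traces = [trial(s) for s in stims]
for thresh in [-2.0, -3.5, -5.0, -6.5]:        # sweep the detection threshold
    counts = [crossings(v, thresh) for v in traces]
    print(thresh, round(mutual_info(stims, counts), 3))
```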
A quantum annealing architecture with all-to-all connectivity from local interactions.
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-10-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems.
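A minimal sketch of the encoding step under stated simplifications: an all-to-all Ising instance on N logical spins maps to N(N-1)/2 physical parity qubits, with each coupling J_ij becoming a local field. For brevity the consistency conditions are written as an overcomplete set of three-qubit triangle parities rather than the minimal plaquette constraints of the paper.

```python
from itertools import combinations

def encode_all_to_all(J):
    """J: dict {(i, j): coupling} with i < j, on N logical spins."""
    n = 1 + max(j for _, j in J)
    # one physical "parity" qubit per logical pair; J_ij becomes its field
    local_fields = {pair: J.get(pair, 0.0)
                    for pair in combinations(range(n), 2)}
    # parity qubit (i,j) represents s_i*s_j, so any triangle multiplies to +1
    constraints = [((i, j), (j, k), (i, k))
                   for i, j, k in combinations(range(n), 3)]
    return local_fields, constraints

J = {(0, 1): 1.0, (0, 2): -0.5, (1, 2): 0.3, (0, 3): 0.7}
fields, cons = encode_all_to_all(J)
print(len(fields), len(cons))   # 6 physical qubits, 4 triangle constraints
```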
Power-rate-distortion analysis for wireless video communication under energy constraint
NASA Astrophysics Data System (ADS)
He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq
2004-01-01
In video coding and streaming over wireless communication networks, the power-demanding video encoding operates on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of a wireless video communication system under an energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends traditional R-D analysis by including another dimension, the power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, the complexity scalability can be translated into power-consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraints, especially over wireless video sensor networks.
Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto
2005-08-01
Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding μ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(μ,d)}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in the Euclidean space, this distribution is S_∞^{(1,d)}(t,p) = {Γ(1+I_d^{-1}) Γ(t+I_d^{-1}) / [Γ(I_d^{-1}) Γ(t+p+I_d^{-1})]} δ_{p,2}, where t = 0, 1, 2, …, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for μ = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) ∝ p^{-1}]: S_N^{(0,rm)}(t,p) = Γ(N)/{Γ[N+1-(t+p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t+p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.
Optimal superdense coding over memory channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadman, Z.; Kampermann, H.; Bruss, D.
2011-10-15
We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.
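A numerical sketch assuming the known unitary-encoding capacity formula C = log2 d + S(ρ_B) − S(ρ_AB) (in bits), evaluated for a two-qubit Werner resource state; the correlated-noise memory channels analyzed in the paper are not modeled here.

```python
import numpy as np

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace_A(rho):                 # trace out the first qubit
    r = rho.reshape(2, 2, 2, 2)
    return r[0, :, 0, :] + r[1, :, 1, :]

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # Bell state
for p in [1.0, 0.9, 0.5]:                             # Werner mixing weight
    rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
    C = 1.0 + von_neumann_entropy(partial_trace_A(rho)) - von_neumann_entropy(rho)
    # 2 bits for the pure Bell state; once C drops below 1 bit, dense
    # coding loses its advantage over a plain classical bit.
    print(p, round(C, 3))
```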
Integrated source and channel encoded digital communications system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1974-01-01
This report presents studies of the digital communication system for the direct communication links from ground to the space shuttle and for the links involving the Tracking and Data Relay Satellite (TDRS). Three main tasks were performed: (1) channel encoding/decoding parameter optimization for the forward and reverse TDRS links; (2) integration of command encoding/decoding and channel encoding/decoding; and (3) a modulation/coding interface study. The general communication environment is presented to provide the necessary background for the tasks and an understanding of the implications of the study results.
Methodology and method and apparatus for signaling with capacity optimized constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2011-01-01
A communication system having a transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.
Proceedings of the Conference on Moments and Signal
NASA Astrophysics Data System (ADS)
Purdue, P.; Solomon, H.
1992-09-01
The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as their weaknesses. It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or higher-order spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than Bussgang algorithms. However, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
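A minimal sketch of one Bussgang-family member, the Godard/CMA tap update, in which the memoryless nonlinearity y(|y|^2 − R2) acts on the equalizer output and drives an LMS-style adaptation; the QPSK source, channel taps, and step size are illustrative.

```python
import numpy as np
rng = np.random.default_rng(6)

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                          # center-spike initialization
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]                 # regressor (newest first)
        y = w @ u
        e = y * (np.abs(y) ** 2 - R2)             # CMA-2 error term
        w -= mu * e * np.conj(u)                  # stochastic gradient step
    return w

symbols = rng.choice([1 + 0j, -1 + 0j, 1j, -1j], size=20000)  # QPSK, |s| = 1
channel = np.array([1.0, 0.35 + 0.2j, -0.1j])
x = np.convolve(symbols, channel)[:len(symbols)]
x += 0.01 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))
w = cma_equalize(x)
y = np.convolve(x, w)[:len(x)]
print(np.mean((np.abs(y[5000:]) ** 2 - 1) ** 2))  # dispersion shrinks toward 0
```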
Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding
NASA Astrophysics Data System (ADS)
Susemihl, Alex; Meir, Ron; Opper, Manfred
2013-03-01
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
NASA Astrophysics Data System (ADS)
Nifontova, Galina; Zvaigzne, Maria; Baryshnikova, Maria; Korostylev, Evgeny; Ramos-Gomes, Fernanda; Alves, Frauke; Nabiev, Igor; Sukhanova, Alyona
2018-01-01
Fabrication of polyelectrolyte microcapsules and their use as carriers of drugs, fluorescent labels, and metal nanoparticles is a promising approach to designing theranostic agents. Semiconductor quantum dots (QDs) are characterized by extremely high brightness and photostability, which make them attractive fluorescent labels for visualization of intracellular penetration and delivery of such microcapsules. Here, we describe an approach to design and fabricate polyelectrolyte microcapsules encoded with core/shell QDs water-solubilized and stabilized with trifunctional polyethylene glycol derivatives, and to characterize their physico-chemical and functional properties. The developed microcapsules were characterized by dynamic light scattering, electrophoretic mobility, scanning electron microscopy, and fluorescence and confocal microscopy approaches, providing exact data on their size distribution, surface charge, and morphological and optical characteristics. The fluorescence lifetimes of the QD-encoded microcapsules were also measured, and their dependence on the time after preparation of the microcapsules was evaluated. The optimal QD content for the encoding procedure, providing the optimal fluorescence properties of the encoded microcapsules, was determined. Finally, intracellular microcapsule uptake by murine macrophages was demonstrated, confirming the possibility of efficient use of the developed system for live-cell imaging and visualization of microcapsule transportation and delivery within living cells.
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
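A minimal sketch of the two-base (color-space) encoding itself: adjacent bases map to one of four colors via the XOR of 2-bit base codes, so a read decodes deterministically from its first base, and a single color error corrupts all downstream decoded bases, which is exactly the ambiguity the alignment algorithm must arbitrate.

```python
CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
BASES = 'ACGT'

def encode_two_base(seq):
    # each adjacent base pair maps to one color (0-3)
    return [CODE[a] ^ CODE[b] for a, b in zip(seq, seq[1:])]

def decode_two_base(first_base, colors):
    # decoding is deterministic once the first base is known
    seq = [first_base]
    for c in colors:
        seq.append(BASES[CODE[seq[-1]] ^ c])
    return ''.join(seq)

s = 'ACGGTA'
colors = encode_two_base(s)                  # [1, 3, 0, 1, 3]
print(colors, decode_two_base('A', colors) == s)
```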
Data transmission system and method
NASA Technical Reports Server (NTRS)
Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)
2010-01-01
A method of transmitting data packets in which randomness is added to the schedule is presented. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.
Time Correlations in Mode Hopping of Coupled Oscillators
NASA Astrophysics Data System (ADS)
Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.
2017-05-01
We study the dynamics of a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. When noise is added, mode hopping between the different states becomes a dominating part of the dynamics. We simplify the system through a Poincaré section and derive a 1D model to describe the dynamics. We explain that, for some parameter values of the external oscillator, the time distribution of occupancy in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions characteristic of a system with time correlations.
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. These algorithms are broadly applicable to practical problems because most real data sources can be simply transformed into this form by appropriate preprocessing. The algorithms have exhibited performance only slightly above the entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably below the measured average data entropy may be observed when data characteristics are changing over the measurement span.
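A concrete member of this family of practical codes is the Rice (Golomb power-of-two) code, in which each nonnegative integer is split into a unary-coded quotient and a k-bit remainder; adaptive variants choose k per block from the data statistics. A minimal sketch, with the adaptivity omitted:

```python
def rice_encode(values, k):
    """Rice (Golomb power-of-two) code: unary quotient, then k remainder bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                 # quotient in unary
        if k:
            bits.append(format(r, f"0{k}b"))       # remainder in k bits
    return "".join(bits)

def rice_decode(bits, k):
    out, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == "1":
            q, i = q + 1, i + 1
        i += 1                                     # skip the terminating '0'
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

data = [3, 0, 7, 12, 1]
assert rice_decode(rice_encode(data, 2), 2) == data
```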
Liu, Cunbao; Yang, Xu; Yao, Yufeng; Huang, Weiwei; Sun, Wenjia; Ma, Yanbing
2014-05-01
Two versions of an optimized gene that encodes the human papillomavirus type 16 major capsid protein L1 were designed according to the codon usage frequency of Pichia pastoris. Y16 was highly expressed in both P. pastoris and Hansenula polymorpha. M16 expression was as efficient as that of Y16 in P. pastoris, but merely detectable in H. polymorpha even though the transcription levels of M16 and Y16 were similar. H. polymorpha has a unique codon usage frequency that contains many more rare codons than Saccharomyces cerevisiae or P. pastoris. These findings indicate that even codon-optimized genes that are expressed well in S. cerevisiae and P. pastoris may be inefficiently expressed in H. polymorpha; thus, rare codons must be avoided when universal optimized gene versions are designed to facilitate expression in a variety of yeast expression systems, especially when H. polymorpha is involved.
Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons.
Yaeli, Steve; Meir, Ron
2010-01-01
Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.
ERIC Educational Resources Information Center
Parsons, Michael W.; Haut, Marc W.; Lemieux, Susan K.; Moran, Maria T.; Leach, Sharon G.
2006-01-01
The existence of a rostrocaudal gradient of medial temporal lobe (MTL) activation during memory encoding has historically received support from positron emission tomography studies, but less so from functional MRI (FMRI) studies. More recently, FMRI studies have demonstrated that characteristics of the stimuli can affect the location of activation…
Towards predicting the encoding capability of MR fingerprinting sequences.
Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P
2017-09-01
Sequence optimization and appropriate sequence selection is still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of the sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. Consistency of these measures for different sequence lengths as well as the capability to predict actual sequence performance in both phantom and in vivo measurements was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
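The Monte Carlo idea, propagating noise through dictionary matching and scoring the resulting parameter error, can be sketched with a toy mono-exponential dictionary standing in for Bloch-simulated fingerprints. The signal model, parameter grids, and SNR below are illustrative assumptions, not the paper's:

```python
import numpy as np

def mc_encoding_capability(t2_grid, echo_times, snr, n_trials=2000, seed=0):
    """RMS parameter error after noisy dot-product dictionary matching."""
    rng = np.random.default_rng(seed)
    dictionary = np.exp(-echo_times[None, :] / t2_grid[:, None])
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    true_idx = len(t2_grid) // 2
    clean = dictionary[true_idx]
    errs = []
    for _ in range(n_trials):
        noisy = clean + rng.standard_normal(clean.size) / snr
        match = np.argmax(dictionary @ noisy)      # dot-product matching
        errs.append(t2_grid[match] - t2_grid[true_idx])
    return np.sqrt(np.mean(np.square(errs)))

t2 = np.linspace(20, 200, 181)          # ms, toy T2 grid
te = np.linspace(5, 100, 30)            # ms, toy acquisition times
print(mc_encoding_capability(t2, te, snr=30.0))
```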
NASA Astrophysics Data System (ADS)
Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min
2014-09-01
In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolonging battery lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec based on a gate-level simulation framework that measures switching activity and energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (group of pictures) size and QP (quantization parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy savings while satisfying the rate and distortion constraints.
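The control structure described, choosing GOP size and QP at run time to minimize energy under rate and distortion constraints, can be sketched as a constrained grid search. The analytic E/R/D expressions and constants below are illustrative placeholders, not the paper's gate-level-calibrated models:

```python
import numpy as np

def choose_controls(r_max, d_max):
    """Pick (GOP, QP) minimizing an energy proxy subject to rate/distortion caps."""
    best, best_e = None, np.inf
    for gop in (1, 2, 4, 8, 16, 32):
        for qp in range(20, 45):
            rate = (1.0 + 4.0 / gop) * 2.0 ** (-0.2 * (qp - 20))   # Mbps (toy model)
            dist = 0.5 * qp + 2.0 / gop                            # MSE proxy (toy)
            energy = (1.0 + 6.0 / gop) * (50 - qp)                 # mJ/frame proxy (toy)
            if rate <= r_max and dist <= d_max and energy < best_e:
                best, best_e = (gop, qp), energy
    return best, best_e

print(choose_controls(r_max=1.5, d_max=18.0))
```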
Franzini, Raphael M; Samain, Florent; Abd Elrahman, Maaly; Mikutis, Gediminas; Nauer, Angela; Zimmermann, Mauro; Scheuermann, Jörg; Hall, Jonathan; Neri, Dario
2014-08-20
DNA-encoded chemical libraries are collections of small molecules, attached to DNA fragments serving as identification barcodes, which can be screened against multiple protein targets, thus facilitating the drug discovery process. The preparation of large DNA-encoded chemical libraries crucially depends on the availability of robust synthetic methods, which enable the efficient conjugation to oligonucleotides of structurally diverse building blocks, sharing a common reactive group. Reactions of DNA derivatives with amines and/or carboxylic acids are particularly attractive for the synthesis of encoded libraries, in view of the very large number of building blocks that are commercially available. However, systematic studies on these reactions in the presence of DNA have not been reported so far. We first investigated conditions for the coupling of primary amines to oligonucleotides, using either a nucleophilic attack on chloroacetamide derivatives or a reductive amination on aldehyde-modified DNA. While both methods could be used for the production of secondary amines, the reductive amination approach was generally associated with higher yields and better purity. In a second endeavor, we optimized conditions for the coupling of a diverse set of 501 carboxylic acids to DNA derivatives, carrying primary and secondary amine functions. The coupling efficiency was generally higher for primary amines, compared to secondary amine substituents, but varied considerably depending on the structure of the acids and on the synthetic methods used. Optimal reaction conditions could be found for certain sets of compounds (with conversions >80%), but multiple reaction schemes are needed when assembling large libraries with highly diverse building blocks. The reactions and experimental conditions presented in this article should facilitate the synthesis of future DNA-encoded chemical libraries, while outlining the synthetic challenges that remain to be overcome.
Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI
NASA Technical Reports Server (NTRS)
Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.
2001-01-01
Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio (SNR) compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the dependence of image SNR on the encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
Kennerley, Steven W.; Wallis, Jonathan D.
2009-01-01
Damage to the frontal lobe can cause severe decision-making impairments. A mechanism that may underlie this is that neurons in the frontal cortex encode many variables that contribute to the valuation of a choice, such as its costs, benefits and probability of success. However, optimal decision-making requires that one considers these variables, not only when faced with the choice, but also when evaluating the outcome of the choice, in order to adapt future behaviour appropriately. To examine the role of the frontal cortex in encoding the value of different choice outcomes, we simultaneously recorded the activity of multiple single neurons in the anterior cingulate cortex (ACC), orbitofrontal cortex (OFC) and lateral prefrontal cortex (LPFC) while subjects evaluated the outcome of choices involving manipulations of probability, payoff and cost. Frontal neurons encoded many of the parameters that enabled the calculation of the value of these variables, including the onset and offset of reward and the amount of work performed, and often encoded the value of outcomes across multiple decision variables. In addition, many neurons encoded both the predicted outcome during the choice phase of the task as well as the experienced outcome in the outcome phase of the task. These patterns of selectivity were more prevalent in ACC relative to OFC and LPFC. These results support a role for the frontal cortex, principally ACC, in selecting between choice alternatives and evaluating the outcome of that selection thereby ensuring that choices are optimal and adaptive. PMID:19453638
Multipath search coding of stationary signals with applications to speech
NASA Astrophysics Data System (ADS)
Fehn, H. G.; Noll, P.
1982-04-01
This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper evaluates the performances of these coders and compares them both with those of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports results on MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
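Of the three MSC classes, codebook coding (vector quantization) is the simplest to sketch: at one bit per sample, blocks of L samples are mapped to a trained codebook of 2^L vectors. A toy LBG-style trainer using plain k-means, with all data and parameters illustrative:

```python
import numpy as np

def train_codebook(x, block_len=4, n_iter=30, seed=0):
    """Train 2**block_len codevectors for blocks of block_len samples (1 bit/sample)."""
    rng = np.random.default_rng(seed)
    blocks = x[: len(x) // block_len * block_len].reshape(-1, block_len)
    code = blocks[rng.choice(len(blocks), 2 ** block_len, replace=False)].copy()
    for _ in range(n_iter):
        d = ((blocks[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)                          # nearest-codevector assignment
        for k in range(len(code)):                 # centroid update
            if np.any(idx == k):
                code[k] = blocks[idx == k].mean(0)
    return code

rng = np.random.default_rng(1)
x = np.convolve(rng.standard_normal(20000), [1, 0.9, 0.5], mode="same")  # correlated source
cb = train_codebook(x)
```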
Robustness of the non-Markovian Alzheimer walk under stochastic perturbation
NASA Astrophysics Data System (ADS)
Cressoni, J. C.; da Silva, L. R.; Viswanathan, G. M.; da Silva, M. A. A.
2012-12-01
The elephant walk model originally proposed by Schütz and Trimper to investigate non-Markovian processes led to the investigation of a series of other random-walk models. Of these, the best known is the Alzheimer walk model, because it was the first model shown to have amnestically induced persistence, i.e., superdiffusion caused by loss of memory. Here we study the robustness of the Alzheimer walk by adding a memoryless stochastic perturbation. Surprisingly, the solution of the perturbed model can be formally reduced to the solutions of the unperturbed model. Specifically, we give an exact solution of the perturbed model by finding a surjective mapping to the unperturbed model.
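A simulation sketch of the perturbed walk, assuming the usual Alzheimer-walk construction (the walker recalls a random step from the first fraction f of its history and repeats it with probability p) plus a memoryless perturbation that replaces the recalled step with an unbiased one at rate s; all parameter values are illustrative:

```python
import numpy as np

def alzheimer_walk(T=10000, p=0.9, f=0.3, s=0.1, seed=0):
    """1D Alzheimer walk trajectory with a memoryless stochastic perturbation."""
    rng = np.random.default_rng(seed)
    steps = np.empty(T, dtype=int)
    steps[0] = 1
    for t in range(1, T):
        if rng.random() < s:                      # memoryless perturbation
            steps[t] = rng.choice((-1, 1))
            continue
        window = max(1, int(f * t))               # remembered early history
        recalled = steps[rng.integers(0, window)]
        steps[t] = recalled if rng.random() < p else -recalled
    return np.cumsum(steps)

x = alzheimer_walk()
```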
Classical capacity of Gaussian thermal memory channels
NASA Astrophysics Data System (ADS)
De Palma, G.; Mari, A.; Giovannetti, V.
2014-10-01
The classical capacity of phase-invariant Gaussian channels has been recently determined under the assumption that such channels are memoryless. In this work we generalize this result by deriving the classical capacity of a model of quantum memory channel, in which the output states depend on the previous input states. In particular we extend the analysis of Lupo et al. [Phys. Rev. Lett. 104, 030501 (2010), 10.1103/PhysRevLett.104.030501 and Phys. Rev. A 82, 032312 (2010), 10.1103/PhysRevA.82.032312] from quantum limited channels to thermal attenuators and thermal amplifiers. Our result applies in many situations in which the physical communication channel is affected by nonzero memory and by thermal noise.
An Isomorphism between Lyapunov Exponents and Shannon's Channel Capacity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedland, Gerald; Metere, Alfredo
We demonstrate that discrete Lyapunov exponents are isomorphic to numeric overflows of the capacity of an arbitrary noiseless and memoryless channel in a Shannon communication model with feedback. The isomorphism allows the understanding of Lyapunov exponents in terms of Information Theory, rather than the traditional definitions in chaos theory. The result also implies alternative approaches to the calculation of related quantities, such as the Kolmogorov-Sinai entropy, which has been linked to thermodynamic entropy. This work provides a bridge between fundamental physics and information theory. It suggests, among other things, that machine learning and other information theory methods can be employed at the core of physics simulations.
The emergence of Zipf's law - Spontaneous encoding optimization by users of a command language
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Hitchcock, R. J.
1986-01-01
The distribution of commands issued by experienced users of a computer operating system allowing command customization tends to conform to Zipf's law. This result documents the emergence of a statistical property of natural language as users master an artificial language. Analysis of Zipf's law by Mandelbrot and Cherry shows that its emergence in the computer interaction of experienced users may be interpreted as evidence that these users optimize their encoding of commands. Accordingly, the extent to which users of a command language exhibit Zipf's law can provide a metric of the naturalness and efficiency with which that language is used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MAGEE,GLEN I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
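A classic optimization of the kind alluded to is table-driven GF(2^8) arithmetic, which turns every Reed-Solomon field multiplication into two log-table lookups and one antilog lookup. A minimal sketch (the AURA code's exact techniques are not documented here, so this is illustrative only):

```python
# Table-driven GF(2^8) arithmetic: each field multiply becomes table lookups.
PRIM = 0x11D                      # x^8 + x^4 + x^3 + x^2 + 1
EXP = [0] * 512                   # doubled antilog table avoids a modulo
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply in GF(2^8) via log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

assert gf_mul(2, 2) == 4          # x * x = x^2
assert gf_mul(0x80, 2) == 0x1D    # x^7 * x reduces modulo the primitive polynomial
```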
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder. The proposed algorithm is therefore designed for the image transmission setting. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
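For reference, the robust soliton distribution that the proposed algorithm is compared against can be computed directly from Luby's construction; c and delta below are the usual tuning parameters:

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution; returns p[d] for degrees d = 1..k."""
    S = c * np.log(k / delta) * np.sqrt(k)
    rho = np.zeros(k + 1)                    # ideal soliton component
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = np.zeros(k + 1)                    # robustness spike component
    pivot = max(1, min(int(round(k / S)), k))
    for d in range(1, pivot):
        tau[d] = S / (k * d)
    tau[pivot] = S * np.log(S / delta) / k
    p = rho + tau
    return p[1:] / p[1:].sum()               # normalize

p = robust_soliton(1000)
print(p[:5], p.argmax() + 1)                 # degree 2 dominates, as expected
```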
Method of generating features optimal to a dataset and classifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.
A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
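A minimal Huffman construction over symbol frequencies illustrates the statistical-encoding step; in the report's setting the symbols would be the prediction corrections (elevation minus predicted value), whose peaked distribution is what shortens the average code length. A sketch:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-free Huffman code from empirical symbol frequencies."""
    freq = Counter(symbols)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)                      # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    return heap[0][2]

codes = huffman_code([0, 0, 0, 1, 1, -1, 2])   # peaked correction distribution
print(codes)
```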
Stochastic information transfer from cochlear implant electrodes to auditory nerve fibers
NASA Astrophysics Data System (ADS)
Gao, Xiao; Grayden, David B.; McDonnell, Mark D.
2014-08-01
Cochlear implants, also called bionic ears, are implanted neural prostheses that can restore lost human hearing function by direct electrical stimulation of auditory nerve fibers. Previously, an information-theoretic framework for numerically estimating the optimal number of electrodes in cochlear implants has been devised. This approach relies on a model of stochastic action potential generation and a discrete memoryless channel model of the interface between the array of electrodes and the auditory nerve fibers. Using these models, the stochastic information transfer from cochlear implant electrodes to auditory nerve fibers is estimated from the mutual information between channel inputs (the locations of electrodes) and channel outputs (the set of electrode-activated nerve fibers). Here we describe a revised model of the channel output in the framework that avoids the side effects caused by an "ambiguity state" in the original model and also makes fewer assumptions about perceptual processing in the brain. A detailed comparison of how different assumptions on fibers and current spread modes impact on the information transfer in the original model and in the revised model is presented. We also mathematically derive an upper bound on the mutual information in the revised model, which becomes tighter as the number of electrodes increases. We found that the revised model leads to a significantly larger maximum mutual information and corresponding number of electrodes compared with the original model and conclude that the assumptions made in this part of the modeling framework are crucial to the model's overall utility.
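The quantity at the heart of this framework, the mutual information of a discrete memoryless channel from electrode inputs to activated-fiber outputs, is straightforward to compute once a transition matrix is assumed. A sketch with toy numbers (the paper's channel models are far more detailed):

```python
import numpy as np

def mutual_information(p_x, P_yx):
    """I(X;Y) in bits: p_x is the input distribution, P_yx[i, j] = P(Y=j | X=i)."""
    p_xy = p_x[:, None] * P_yx
    p_y = p_xy.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / (p_x[:, None] * p_y[None, :]))
    return np.nansum(terms)                  # zero-probability cells contribute 0

# two electrodes whose current spreads overlap -> partially confusable outputs
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(mutual_information(np.array([0.5, 0.5]), P))
```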
Device-independent two-party cryptography secure against sequential attacks
NASA Astrophysics Data System (ADS)
Kaniewski, Jędrzej; Wehner, Stephanie
2016-05-01
The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser-Horne-Shimony-Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation.
Two-layer contractive encodings for learning stable nonlinear features.
Schulz, Hannes; Cho, Kyunghyun; Raiko, Tapani; Behnke, Sven
2015-04-01
Unsupervised learning of feature hierarchies is often a good strategy to initialize deep architectures for supervised learning. Most existing deep learning methods build these feature hierarchies layer by layer in a greedy fashion using either auto-encoders or restricted Boltzmann machines. Both yield encoders which compute linear projections of input followed by a smooth thresholding function. In this work, we demonstrate that these encoders fail to find stable features when the required computation is in the exclusive-or class. To overcome this limitation, we propose a two-layer encoder which is less restricted in the type of features it can learn. The proposed encoder is regularized by an extension of previous work on contractive regularization. This proposed two-layer contractive encoder potentially poses a more difficult optimization problem, and we further propose to linearly transform hidden neurons of the encoder to make learning easier. We demonstrate the advantages of the two-layer encoders qualitatively on artificially constructed datasets as well as commonly used benchmark datasets. We also conduct experiments on a semi-supervised learning task and show the benefits of the proposed two-layer encoders trained with the linear transformation of perceptrons. Copyright © 2014 Elsevier Ltd. All rights reserved.
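For the single-layer case, the contractive regularizer has a convenient closed form for a sigmoid encoder: the squared Frobenius norm of the Jacobian factorizes over hidden units. A numpy sketch of that penalty (the paper's two-layer extension is not shown; the data and weights are random placeholders):

```python
import numpy as np

def contractive_penalty(X, W, b):
    """||J||_F^2 for h = sigmoid(Wx + b): sum_i (h_i(1-h_i))^2 * ||W_i||^2, over a batch."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))        # hidden activations
    row_norms = (W ** 2).sum(axis=1)                # ||W_i||^2 per hidden unit
    return (((H * (1.0 - H)) ** 2) * row_norms).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 10))                   # toy batch
W = 0.1 * rng.standard_normal((5, 10))
print(contractive_penalty(X, W, np.zeros(5)))
```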
Extended depth of field in an intrinsically wavefront-encoded biometric iris camera
NASA Astrophysics Data System (ADS)
Bergkoetter, Matthew D.; Bentley, Julie L.
2014-12-01
This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding; however, the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
Enzymes and Enzyme Activity Encoded by Nonenveloped Viruses.
Azad, Kimi; Banerjee, Manidipa; Johnson, John E
2017-09-29
Viruses are obligate intracellular parasites that rely on host cell machineries for their replication and survival. Although viruses tend to make optimal use of the host cell protein repertoire, they need to encode essential enzymatic or effector functions that may not be available or accessible in the host cellular milieu. The enzymes encoded by nonenveloped viruses-a group of viruses that lack any lipid coating or envelope-play vital roles in all the stages of the viral life cycle. This review summarizes the structural, biochemical, and mechanistic information available for several classes of enzymes and autocatalytic activity encoded by nonenveloped viruses. Advances in research and development of antiviral inhibitors targeting specific viral enzymes are also highlighted.
Optimal Achievable Encoding for Brain Machine Interface
2017-12-22
...dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that... ...networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy...
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newer H.264/AVC encoder because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RDO process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
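Stage 2 of such a scheme needs a closed-form rate-distortion mapping from stage-1 residual statistics to a frame bit budget. As an illustration only, using the classic Gaussian form R = 0.5*log2(variance/D) rather than the paper's specific approximation, with all numbers below assumed:

```python
import numpy as np

def frame_bits(residual_var, d_target, n_samples):
    """Bit budget per frame from residual variance and a common target distortion."""
    r = 0.5 * np.log2(np.maximum(residual_var / d_target, 1.0))  # bits/sample
    return r * n_samples

variances = np.array([40.0, 25.0, 90.0, 60.0])   # stage-1 residual variances (toy)
print(frame_bits(variances, d_target=20.0, n_samples=352 * 288))
```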
Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A
2016-12-01
Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large-scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. This technology incorporates an accurate, automated and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large-scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making
Drugowitsch, Jan; Pouget, Alexandre
2012-01-01
Optimal binary perceptual decision making requires accumulation of evidence in the form of a probability distribution that specifies the probability of the choices being correct given the evidence so far. Reward rates can then be maximized by stopping the accumulation when the confidence about either option reaches a threshold. Behavioral and neuronal evidence suggests that humans and animals follow such a probabilistic decision strategy, although its neural implementation has yet to be fully characterized. Here we show that diffusion decision models and attractor network models provide an approximation to the optimal strategy only under certain circumstances. In particular, neither model type is sufficiently flexible to encode the reliability of both the momentary and the accumulated evidence, which is a prerequisite to accumulate evidence of time-varying reliability. Probabilistic population codes, in contrast, can encode these quantities and, as a consequence, have the potential to implement the optimal strategy accurately. PMID:22884815
An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation
Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie
2014-01-01
In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of the data and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical analysis of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a device designed for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912
Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation
Budde, Matthew D.; Skinner, Nathan P.; Muftuler, L. Tugan; Schmit, Brian D.; Kurpad, Shekar N.
2017-01-01
Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI and subsequently implement the protocol in the healthy human spinal cord. First, two complementary DDE approaches were evaluated using an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that in the spinal cord, SDE provides similar contrast with improved signal-to-noise ratio. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high-quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using a commercially available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial volume effects were obtained in a clinically feasible imaging time with a straightforward analysis and variability comparable to axial diffusivity derived from DTI. Overall, the results and optimizations describe a protocol that mitigates several difficulties with DTI of the spinal cord. Detection of acute axonal damage in the injured or diseased spinal cord will benefit from the optimized filter-probe diffusion MRI protocol outlined here. PMID:29311786
Fuel management optimization using genetic algorithms and expert knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1996-09-01
The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
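The genetic-algorithm skeleton, bit-string genotypes with selection, crossover, and mutation under a penalty-augmented fitness, can be sketched as below. This is generic: CIGARO's expert-knowledge operators, such as regional crossover of physically adjacent fuel assemblies, are not modeled, and the toy objective is an assumption.

```python
import numpy as np

def genetic_search(fitness, n_bits, pop=60, gens=200, p_mut=0.01, seed=0):
    """Bare-bones bit-string GA: tournament selection, one-point crossover, bit flips."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_bits))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        i, j = rng.integers(0, pop, (2, pop))          # tournament selection
        parents = P[np.where(f[i] > f[j], i, j)]
        cut = rng.integers(1, n_bits, pop // 2)        # one-point crossover
        for k, c in enumerate(cut):
            a, b = parents[2 * k].copy(), parents[2 * k + 1].copy()
            parents[2 * k, c:], parents[2 * k + 1, c:] = b[c:], a[c:]
        P = np.where(rng.random((pop, n_bits)) < p_mut, 1 - parents, parents)
    f = np.array([fitness(ind) for ind in P])
    return P[f.argmax()]

# toy objective with a penalty: maximize ones, but penalize exceeding 20 of them
best = genetic_search(lambda x: x.sum() - 5.0 * max(0, x.sum() - 20), 32)
```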
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct-to-Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). With this additional link and a suitable relay-channel code, a more reliable signal can be obtained. Although significant progress has been made on the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted to other conditions without extensive re-optimization. The code presented here combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, a modular structure allowing easy design, and rate compatibility so that the code can be matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by brute-force optimization, but a cleverer method is the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using the concept of "parity forwarding" with subsequent successive decoding, which removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes as representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with that of the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
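For concreteness, a rate-1/2, constraint-length-7 convolutional encoder (the common (171,133) octal generator pair, assumed here rather than taken from the report) and a block interleaver of the kind used to break up time-correlated fading:

```python
import numpy as np

def conv_encode(bits, g1=0o171, g2=0o133, K=7):
    """Rate-1/2 convolutional encoder: two parity streams from a K-bit register."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

def block_interleave(bits, rows=8):
    """Write row-wise, read column-wise: a fade burst is spread across the codeword,
    so the decoder sees something closer to a memoryless channel."""
    cols = len(bits) // rows
    m = np.array(bits[: rows * cols]).reshape(rows, cols)
    return list(m.T.reshape(-1))

coded = conv_encode([1, 0, 1, 1, 0, 0, 1, 0] * 8)
sent = block_interleave(coded)
```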
Information transmission over an amplitude damping channel with an arbitrary degree of memory
NASA Astrophysics Data System (ADS)
D'Arrigo, Antonio; Benenti, Giuliano; Falci, Giuseppe; Macchiavello, Chiara
2015-12-01
We study the performance of a partially correlated amplitude damping channel acting on two qubits. We derive lower bounds for the single-shot classical capacity by studying two kinds of quantum ensembles, one which allows us to maximize the Holevo quantity for the memoryless channel and the other allowing the same task but for the full-memory channel. In these two cases we also show the amount of entanglement which is involved in achieving the maximum of the Holevo quantity. For the single-shot quantum capacity we discuss both a lower and an upper bound, achieving a good estimate for high values of the channel transmissivity. We finally compute the entanglement-assisted classical channel capacity.
Localization Transition Induced by Learning in Random Searches
NASA Astrophysics Data System (ADS)
Falcón-Cortés, Andrea; Boyer, Denis; Giuggioli, Luca; Majumdar, Satya N.
2017-10-01
We solve an adaptive search model where a random walker or Lévy flight stochastically resets to previously visited sites on a d -dimensional lattice containing one trapping site. Because of reinforcement, a phase transition occurs when the resetting rate crosses a threshold above which nondiffusive stationary states emerge, localized around the inhomogeneity. The threshold depends on the trapping strength and on the walker's return probability in the memoryless case. The transition belongs to the same class as the self-consistent theory of Anderson localization. These results show that similarly to many living organisms and unlike the well-studied Markovian walks, non-Markov movement processes can allow agents to learn about their environment and promise to bring adaptive solutions in search tasks.
Simulation program of nonlinearities applied to telecommunication systems
NASA Technical Reports Server (NTRS)
Thomas, C.
1979-01-01
In any satellite communication system, the problems of distortion created by nonlinear devices or systems must be considered. The subject of this paper is the use of the fast Fourier transform (FFT) in the prediction of the intermodulation performance of amplifiers, mixers, and filters. A memoryless nonlinear model is chosen to simulate the amplitude and phase nonlinearities of the device in the simulation program, written in FORTRAN 4. The experimentally observed nonlinearity parameters of a low-noise 3.7-4.2 GHz amplifier are related to the gain and phase coefficients of a Fourier series. The measured results are compared with those calculated from the simulation in the cases where the input signal is composed of two carriers, three carriers, and noise power density.
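The FFT-based prediction can be illustrated with a two-carrier test through a memoryless cubic nonlinearity, which places third-order intermodulation products at 2f1-f2 and 2f2-f1. The AM/AM curve below is a toy stand-in for the measured gain/phase characteristics expanded in a Fourier series:

```python
import numpy as np

# Two-carrier intermodulation through a memoryless nonlinearity, checked with the FFT.
fs, n = 1024.0, 4096
t = np.arange(n) / fs
f1, f2 = 100.0, 110.0
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
y = x - 0.05 * x ** 3                      # mild compressive (toy) nonlinearity
spec = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (f1, f2, 2 * f1 - f2, 2 * f2 - f1):   # third-order products at 90 and 120 Hz
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:6.1f} Hz -> {20 * np.log10(spec[k] + 1e-12):7.1f} dB")
```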
Lorenz, Felix K. M.; Wilde, Susanne; Voigt, Katrin; Kieback, Elisa; Mosetter, Barbara; Schendel, Dolores J.; Uckert, Wolfgang
2015-01-01
Codon optimization of nucleotide sequences is a widely used method to achieve high levels of transgene expression for basic and clinical research. Until now, immunological side effects have not been described. To trigger T cell responses against human papillomavirus, we incubated T cells with dendritic cells that were pulsed with RNA encoding the codon-optimized E7 oncogene. All T cell receptors isolated from responding T cell clones recognized target cells expressing the codon-optimized E7 gene but not the wild type E7 sequence. Epitope mapping revealed recognition of a cryptic epitope from the +3 alternative reading frame of codon-optimized E7, which is not encoded by the wild type E7 sequence. The introduction of a stop codon into the +3 alternative reading frame protected the transgene product from recognition by T cell receptor gene-modified T cells. This is the first experimental study demonstrating that codon optimization can render a transgene artificially immunogenic through generation of a dominant cryptic epitope. This finding may be of great importance for the clinical field of gene therapy to avoid rejection of gene-corrected cells and for the design of DNA- and RNA-based vaccines, where codon optimization may artificially add a strong immunogenic component to the vaccine. PMID:25799237
Attending Globally or Locally: Incidental Learning of Optimal Visual Attention Allocation
ERIC Educational Resources Information Center
Beck, Melissa R.; Goldstein, Rebecca R.; van Lamsweerde, Amanda E.; Ericson, Justin M.
2018-01-01
Attention allocation determines the information that is encoded into memory. Can participants learn to optimally allocate attention based on what types of information are most likely to change? The current study examined whether participants could incidentally learn that changes to either high spatial frequency (HSF) or low spatial frequency (LSF)…
Optimal Weight Assignment for a Chinese Signature File.
ERIC Educational Resources Information Center
Liang, Tyne; And Others
1996-01-01
Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…
Optimizing inhomogeneous spin ensembles for quantum memory
NASA Astrophysics Data System (ADS)
Bensky, Guy; Petrosyan, David; Majer, Johannes; Schmiedmayer, Jörg; Kurizki, Gershon
2012-07-01
We propose a method to maximize the fidelity of quantum memory implemented by a spectrally inhomogeneous spin ensemble. The method is based on preselecting the optimal spectral portion of the ensemble by judiciously designed pulses. This leads to significant improvement of the transfer and storage of quantum information encoded in the microwave or optical field.
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
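The Gray-code effect is easy to reproduce: for values that differ by a small amount, the binary-reflected Gray code q ^ (q >> 1) flips far fewer bit-plane bits than natural binary, which is what raises the bit-plane correlation exploited by the Slepian-Wolf coder. A toy demonstration with an assumed side-information noise model:

```python
import numpy as np

def gray(q):
    """Binary-reflected Gray code: adjacent integers differ in exactly one bit."""
    return q ^ (q >> 1)

rng = np.random.default_rng(0)
x = rng.integers(0, 256, 100000)                        # quantized source
y = np.clip(x + rng.integers(-2, 3, x.size), 0, 255)    # correlated side info (toy)

for label, a, b in (("natural", x, y), ("Gray", gray(x), gray(y))):
    diff = np.unpackbits((a ^ b).astype(np.uint8)).mean()
    print(f"{label:8s} bit-plane disagreement: {diff:.4f}")
```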
Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design
NASA Astrophysics Data System (ADS)
Debes, Eric; Kaine, Greg
2002-11-01
In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the intra TLP in a video encoder. We show that a well-distributed, highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction to better exploit TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel® VTune™ performance analyzer. The experimental measurements are compared to theoretical results.
Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.
Rowe, Michael H; Neiman, Alexander B
2012-01-24
We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first order statistics; 2) a regular discharge, as measured by normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal to noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate and fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems. This article is part of a Special Issue entitled: Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.
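Finding 3 rests on the standard lower bound on the information rate of a linear encoder in terms of the stimulus-response coherence, R >= -∫ log2(1 - C(f)) df. A sketch with an illustrative Lorentzian coherence profile standing in for measured afferent data:

```python
import numpy as np

# Lower bound on the mutual information rate from the coherence C(f), in bits/s.
f = np.linspace(0.1, 60.0, 600)                  # Hz
C = 0.8 / (1.0 + (f / 15.0) ** 2)                # toy coherence profile
rate = -np.trapz(np.log2(1.0 - C), f)
print(f"linear-encoding information rate >= {rate:.1f} bits/s")
```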
Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.
Gangopadhyay, Ahana; Chakrabartty, Shantanu
2018-06-01
This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general, in this paper we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of sigma-delta modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.
Ohto, C; Ishida, C; Nakane, H; Muramatsu, M; Nishino, T; Obata, S
1999-05-01
Prenyltransferases (prenyl diphosphate synthases) are a broad group of enzymes that catalyze the consecutive condensation of isopentenyl diphosphate (IPP, C5), a homoallylic diphosphate, with allylic diphosphates to synthesize prenyl diphosphates of various chain lengths; they have highly conserved regions in their amino acid sequences. Based on this information, three prenyltransferase homologue genes were cloned from a thermophilic cyanobacterium, Synechococcus elongatus. Through analyses of the reaction products of the enzymes encoded by these genes, it was revealed that one encodes a thermolabile geranylgeranyl (C20) diphosphate synthase, another encodes a farnesyl (C15) diphosphate synthase whose optimal reaction temperature is 60 degrees C, and the third encodes a prenyltransferase whose optimal reaction temperature is 75 degrees C. The last enzyme could catalyze the synthesis of five prenyl diphosphates, namely farnesyl, geranylgeranyl, geranylfarnesyl (C25), hexaprenyl (C30), and heptaprenyl (C35) diphosphates, from dimethylallyl (C5) diphosphate, geranyl (C10) diphosphate, or farnesyl diphosphate as the allylic substrate. The product specificity of this novel kind of enzyme varied according to the ratio of the allylic and homoallylic substrates. The positions of these three S. elongatus enzymes in a phylogenetic tree of prenyltransferases are discussed in comparison with those of a mesophilic cyanobacterium, Synechocystis PCC6803, whose complete genome has been reported by Kaneko et al. (1996).
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
Through reorganizing the execution order and optimizing the data structure, we propose an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework in CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we propose several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation achieves a 20x speedup over the serial program and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is much more dependent on memory bandwidth, which gives an insight for new architecture designs. PMID:24757432
Development of a codon optimization strategy using the efor RED reporter gene as a test case
NASA Astrophysics Data System (ADS)
Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila
2018-04-01
Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically relevant drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal for determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error, and to avoid this, a codon optimization strategy was developed based on E. coli codon usage using the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline and threonine with the most frequently used codons in E. coli. Furthermore, codons encoding valine and glycine are substituted with the second most highly used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed when designing an optimal sequence for heterologous protein production in E. coli.
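The substitution rule itself is mechanical, as the sketch below shows. Note that the preferred-codon table here is an illustrative assumption, not taken from the paper, and should be checked against a current E. coli codon-usage table before designing any real sequence:

```python
# Sketch of the described strategy: replace Ser/Leu/Pro/Thr codons with the
# most-used E. coli codon and Val/Gly with the second-most-used one.
PREFERRED = {
    "S": "AGC", "L": "CTG", "P": "CCG", "T": "ACC",   # most frequent (assumed)
    "V": "GTT", "G": "GGT",                           # second most frequent (assumed)
}
CODON_TO_AA = {
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S", "AGT": "S", "AGC": "S",
    "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L", "TTA": "L", "TTG": "L",
    "CCT": "P", "CCC": "P", "CCA": "P", "CCG": "P",
    "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T",
    "GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
}

def optimize(seq):
    """Rewrite targeted codons; all other codons pass through unchanged."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    return "".join(PREFERRED.get(CODON_TO_AA.get(c, ""), c) for c in codons)

print(optimize("TTACCATCAGGGGTA"))  # Leu-Pro-Ser-Gly-Val -> CTGCCGAGCGGTGTT
```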
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal. PMID:26042002
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states; within the considered error model, we find a threshold on the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906
Deep Space Network Scheduling Using Evolutionary Computational Methods
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.
2007-01-01
The paper presents the specific approach taken to formulate the Deep Space Network scheduling problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-offs of the scheduling problem and provide insight into the conflict resolution process.
An Improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems
Feng, Yanhong; Jia, Ke; He, Yichao
2014-01-01
Cuckoo search (CS) is a robust new swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed into a synchronous evolutionary search over a discrete space, based on the idea of individual hybrid encoding. Next, the concept of a confidence interval (CI) is introduced: a new position-updating rule is designed, and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents ICS from becoming trapped in local optima. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible ones. Experiments on a large number of knapsack problem (KP) instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions. PMID:24527026
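A minimal sketch of the greedy repair-and-optimize transform described above, using the standard value-density ordering for 0-1 knapsack (the instance and variable names are illustrative):

```python
# Greedy transform for 0-1 knapsack: repair an infeasible bit vector by
# dropping low-density items, then improve a feasible one by greedily
# adding high-density items that still fit.
def greedy_transform(x, values, weights, capacity):
    n = len(x)
    # Items sorted by value density, best first.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    load = sum(w for w, bit in zip(weights, x) if bit)
    x = list(x)
    # Repair: remove worst-density selected items until feasible.
    for i in reversed(order):
        if load <= capacity:
            break
        if x[i]:
            x[i] = 0
            load -= weights[i]
    # Optimize: add best-density unselected items that fit.
    for i in order:
        if not x[i] and load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x

print(greedy_transform([1, 1, 1], values=[6, 10, 12], weights=[1, 2, 3], capacity=4))
# -> [1, 1, 0]: drops the lowest-density item to restore feasibility
```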
Neurons in the Frontal Lobe Encode the Value of Multiple Decision Variables
Kennerley, Steven W.; Dahmubed, Aspandiar F.; Lara, Antonio H.; Wallis, Jonathan D.
2009-01-01
A central question in behavioral science is how we select among choice alternatives to consistently obtain the most beneficial outcomes. Three variables are particularly important when making a decision: the potential payoff, the probability of success, and the cost in terms of time and effort. A key brain region in decision making is the frontal cortex, as damage here impairs the ability to make optimal choices across a range of decision types. We simultaneously recorded the activity of multiple single neurons in the frontal cortex while subjects made choices involving the three aforementioned decision variables. This enabled us to contrast the relative contributions of the anterior cingulate cortex (ACC), the orbitofrontal cortex, and the lateral prefrontal cortex to the decision-making process. Neurons in all three areas encoded value relating to choices involving probability, payoff, or cost manipulations. However, the most significant signals were in the ACC, where neurons encoded multiplexed representations of the three different decision variables. This supports the notion that the ACC is an important component of the neural circuitry underlying optimal decision making. PMID:18752411
Pan, Xiaoyong; Hu, Xiaohua; Zhang, Yu Hang; Feng, Kaiyan; Wang, Shao Peng; Chen, Lei; Huang, Tao; Cai, Yu Dong
2018-04-12
Atrioventricular septal defect (AVSD) is a clinically significant subtype of congenital heart disease (CHD) that severely affects the health of newborns and is associated with Down syndrome (DS). Thus, exploring the differences in functional genes between DS samples with and without AVSD is a critical way to investigate the complex association between AVSD and DS. In this study, we present a computational method to distinguish DS patients with AVSD from those without AVSD using the recently proposed self-normalizing neural network (SNN). First, each patient was encoded using the copy numbers of probes on chromosome 21. The encoded features were ranked by the reliable Monte Carlo feature selection (MCFS) method to obtain a ranked feature list. Based on this list, we used a two-stage incremental feature selection to construct two series of feature subsets and applied SNNs to build classifiers and identify optimal features. The results show that 2737 optimal features were obtained, and the corresponding optimal SNN classifier constructed on these features yielded a Matthews correlation coefficient (MCC) of 0.748. For comparison, random forest was also used to build classifiers and uncover optimal features; this method achieved an optimal MCC of 0.582 when the top 132 features were used. Finally, we analyzed key features among the optimal features identified by the SNN and found literature support for their essential roles.
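A minimal sketch of incremental feature selection over a ranked feature list; scikit-learn's random forest stands in for the SNN here, and the toy data and ranking are illustrative assumptions:

```python
# Incremental feature selection: train a classifier on the top-k features
# for growing k and keep the k that maximizes the Matthews correlation
# coefficient (MCC) under cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # toy stand-in for probe copy numbers
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # labels depend on the first 3 features
ranking = list(range(50))                   # assume an MCFS-style ranked feature list

best_k, best_mcc = 0, -1.0
for k in range(1, len(ranking) + 1, 5):     # step through feature-subset sizes
    cols = ranking[:k]
    pred = cross_val_predict(
        RandomForestClassifier(n_estimators=100, random_state=0),
        X[:, cols], y, cv=5)
    mcc = matthews_corrcoef(y, pred)
    if mcc > best_mcc:
        best_k, best_mcc = k, mcc

print(f"optimal subset size: {best_k}, MCC = {best_mcc:.3f}")
```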
Eddy current compensated double diffusion encoded (DDE) MRI.
Mueller, Lars; Wetscherek, Andreas; Kuder, Tristan Anselm; Laun, Frederik Bernd
2017-01-01
Eddy currents might lead to image distortions in diffusion-weighted echo planar imaging. A method is proposed to reduce their effects on double diffusion encoding (DDE) MRI experiments and the thereby derived microscopic fractional anisotropy (μFA). The twice-refocused spin echo scheme was adapted for DDE measurements. To assess the effect of individual diffusion encodings on the image distortions, measurements of a grid of plastic rods in water were performed. The effect of eddy current compensation on μFA measurements was evaluated in the brains of six healthy volunteers. The use of an eddy current compensation reduced the signal variation. As expected, the distortions caused by the second encoding were larger than those of the first encoding, entailing a stronger need to compensate for them. For an optimal result, however, both encodings had to be compensated. The artifact reduction strongly improved the measurement of the μFA in ventricles and gray matter by reducing the overestimation. An effect of the compensation on absolute μFA values in white matter was not observed. It is advisable to compensate both encodings in DDE measurements for eddy currents. Magn Reson Med 77:328-335, 2017. © 2015 Wiley Periodicals, Inc.
A protein-dependent side-chain rotamer library.
Bhuyan, Md Shariful Islam; Gao, Xin
2011-12-14
The protein side-chain packing problem remains one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent and backbone-dependent rotamer libraries. Backbone-independent libraries encode only sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information rather than sequentially local information. Since the backbone structure is given in the side-chain prediction problem, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library that encodes structural information of all the spatially neighboring residues; we call these protein-dependent rotamer libraries. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. The marginal distributions are then estimated by inference algorithms, without global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of side-chain prediction accuracy and rotamer ranking ability. Furthermore, without global optimization or search, the side-chain prediction power of the protein-dependent library is comparable to that of global-search-based side-chain prediction methods.
Evidence for history-dependence of influenza pandemic emergence
NASA Astrophysics Data System (ADS)
Hill, Edward M.; Tildesley, Michael J.; House, Thomas
2017-03-01
Influenza A viruses have caused a number of global pandemics, with considerable mortality in humans. Here, we analyse the time periods between influenza pandemics since 1700 under different assumptions to determine whether the emergence of new pandemic strains is a memoryless or history-dependent process. Bayesian model selection between exponential and gamma distributions for these time periods supports the hypothesis of history-dependence under eight out of nine sets of modelling assumptions. Using the fitted parameters to make predictions shows a high level of variability in the modelled number of pandemics between 2010 and 2110. The approach we take here relies on limited data, so it is uncertain, but it provides cheap, safe and direct evidence relating to pandemic emergence, a field where indirect measurements are often made at great risk and cost.
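A minimal sketch of this kind of memoryless-versus-history-dependent comparison on waiting times; for brevity it uses maximum-likelihood fits and AIC rather than the paper's Bayesian model selection, and the waiting times below are illustrative, not the historical record:

```python
# Compare a memoryless (exponential) and a history-dependent (gamma)
# model of inter-pandemic waiting times via maximum likelihood + AIC.
import numpy as np
from scipy import stats

waits = np.array([13., 29., 11., 42., 9., 19., 39., 11., 41.])  # illustrative gaps (years)

# Exponential: fix location at 0, fit the scale (mean waiting time).
loc_e, scale_e = stats.expon.fit(waits, floc=0)
ll_e = stats.expon.logpdf(waits, loc_e, scale_e).sum()
aic_e = 2 * 1 - 2 * ll_e                  # one free parameter

# Gamma: a fitted shape different from 1 indicates departure from
# memorylessness (shape = 1 recovers the exponential).
shape, loc_g, scale_g = stats.gamma.fit(waits, floc=0)
ll_g = stats.gamma.logpdf(waits, shape, loc_g, scale_g).sum()
aic_g = 2 * 2 - 2 * ll_g                  # two free parameters

print(f"exponential AIC = {aic_e:.1f}, gamma AIC = {aic_g:.1f}, "
      f"gamma shape = {shape:.2f}")
```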
Kusakabe, Tamami; Tatsuke, Tsuneyuki; Tsuruno, Keigo; Hirokawa, Yasutaka; Atsumi, Shota; Liao, James C; Hanai, Taizo
2013-11-01
Production of alternative fuels or chemicals directly from solar energy and carbon dioxide using engineered cyanobacteria is an attractive approach to reduce petroleum dependency and minimize carbon emissions. Here, we constructed a synthetic pathway composed of acetyl-CoA acetyltransferase (encoded by thl), acetoacetyl-CoA transferase (encoded by atoAD), acetoacetate decarboxylase (encoded by adc) and secondary alcohol dehydrogenase (encoded by adh) in Synechococcus elongatus strain PCC 7942 to produce isopropanol. The heterologous enzyme-coding genes, originating from Clostridium acetobutylicum ATCC 824 (thl and adc), Escherichia coli K-12 MG1655 (atoAD) and Clostridium beijerinckii (adh), were integrated into the S. elongatus genome. Under the optimized production conditions, the engineered cyanobacteria produced 26.5 mg/L of isopropanol after 9 days. © 2013 Published by Elsevier Inc.
In Search of the Optimal Path: How Learners at Task Use an Online Dictionary
ERIC Educational Resources Information Center
Hamel, Marie-Josee
2012-01-01
We analyzed approximately 180 navigation paths followed by six learners while they performed three language encoding tasks on a computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are increasingly applied to library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms, with 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble-metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene were used to build a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
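A minimal sketch of penalty-function constraint handling for a binary-encoded candidate, as contrasted above with repair algorithms and special representations (the composition constraint, weights, and stand-in objective are illustrative assumptions):

```python
# Penalty-function constraint handling for a binary-encoded candidate:
# each bit switches one catalyst component on/off; compositions using
# more than MAX_COMPONENTS are penalized rather than discarded.
import random

MAX_COMPONENTS = 4
PENALTY_WEIGHT = 10.0

def raw_objective(bits):
    # Stand-in for the predictive model of catalytic activity.
    return sum(b * w for b, w in zip(bits, [3.0, 1.5, 2.2, 0.7, 2.9, 1.1]))

def fitness(bits):
    violation = max(0, sum(bits) - MAX_COMPONENTS)  # constraint violation measure
    return raw_objective(bits) - PENALTY_WEIGHT * violation

random.seed(1)
population = [[random.randint(0, 1) for _ in range(6)] for _ in range(8)]
best = max(population, key=fitness)
print(best, fitness(best))
```

A repair algorithm would instead flip bits until the constraint holds; the penalty approach keeps infeasible candidates in the population but makes them uncompetitive.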
Optimization of Light-Harvesting Pigment Improves Photosynthetic Efficiency.
Jin, Honglei; Li, Mengshu; Duan, Sujuan; Fu, Mei; Dong, Xiaoxiao; Liu, Bing; Feng, Dongru; Wang, Jinfa; Wang, Hong-Bin
2016-11-01
Maximizing light capture by light-harvesting pigment optimization represents an attractive but challenging strategy to improve photosynthetic efficiency. Here, we report that loss of a previously uncharacterized gene, HIGH PHOTOSYNTHETIC EFFICIENCY1 (HPE1), optimizes light-harvesting pigments, leading to improved photosynthetic efficiency and biomass production. Arabidopsis (Arabidopsis thaliana) hpe1 mutants show faster electron transport and increased contents of carbohydrates. HPE1 encodes a chloroplast protein containing an RNA recognition motif that directly associates with and regulates the splicing of target RNAs of plastid genes. HPE1 also interacts with other plastid RNA-splicing factors, including CAF1 and OTP51, which share common targets with HPE1. Deficiency of HPE1 alters the expression of nucleus-encoded chlorophyll-related genes, probably through plastid-to-nucleus signaling, causing decreased total content of chlorophyll (a+b) in a limited range but increased chlorophyll a/b ratio. Interestingly, this adjustment of light-harvesting pigment reduces antenna size, improves light capture, decreases energy loss, mitigates photodamage, and enhances photosynthetic quantum yield during photosynthesis. Our findings suggest a novel strategy to optimize light-harvesting pigments that improves photosynthetic efficiency and biomass production in higher plants. © 2016 American Society of Plant Biologists. All Rights Reserved.
Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2013-09-01
It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
NASA Technical Reports Server (NTRS)
Baluja, Shumeet
1995-01-01
This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range in size from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusion of image components, is not considered by these models. Here we ask whether occlusions affect the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encoding and receptive fields predicted by the two models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and matches the experimentally observed high proportions of 'globular' fields well. Our computational study therefore suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Optimized Reaction Conditions for Amide Bond Formation in DNA-Encoded Combinatorial Libraries.
Li, Yizhou; Gabriele, Elena; Samain, Florent; Favalli, Nicholas; Sladojevich, Filippo; Scheuermann, Jörg; Neri, Dario
2016-08-08
DNA-encoded combinatorial libraries are increasingly being used as tools for the discovery of small organic binding molecules to proteins of biological or pharmaceutical interest. In the majority of cases, synthetic procedures for the formation of DNA-encoded combinatorial libraries incorporate at least one step of amide bond formation between amino-modified DNA and a carboxylic acid. We investigated reaction conditions and established a methodology by using 1-ethyl-3-(3-(dimethylamino)propyl)carbodiimide, 1-hydroxy-7-azabenzotriazole and N,N'-diisopropylethylamine (EDC/HOAt/DIPEA) in combination, which provided conversions greater than 75% for 423/543 (78%) of the carboxylic acids tested. These reaction conditions were efficient with a variety of primary and secondary amines, as well as with various types of amino-modified oligonucleotides. The reaction conditions, which also worked efficiently over a broad range of DNA concentrations and reaction scales, should facilitate the synthesis of novel DNA-encoded combinatorial libraries.
Remote NMR/MRI detection of laser polarized gases
Pines, Alexander; Saxena, Sunil; Moule, Adam; Spence, Megan; Seeley, Juliette A.; Pierce, Kimberly L.; Han, Song-I; Granwehr, Josef
2006-06-13
An apparatus and method for remote NMR/MRI spectroscopy comprising an encoding coil with a sample chamber, a supply of signal carriers (preferably hyperpolarized xenon), and a detector, allowing the spatial and temporal separation of the signal preparation and signal detection steps. This separation allows the physical conditions and methods of the encoding and detection steps to be optimized independently. The encoding of the carrier molecules may take place in a high or a low magnetic field, and conventional NMR pulse sequences can be split between the encoding and detection steps. In one embodiment, the detector is a high-field NMR apparatus. In another embodiment, the detector is a superconducting quantum interference device. A further embodiment uses optical detection of Rb--Xe spin exchange. Another embodiment uses an optical magnetometer based on nonlinear Faraday rotation. Concentrating the signal carriers in the detector can greatly improve the signal-to-noise ratio.
Guo, Tianruo; Yang, Chih Yu; Tsai, David; Muralidharan, Madhuvanthi; Suaning, Gregg J.; Morley, John W.; Dokos, Socrates; Lovell, Nigel H.
2018-01-01
The ability for visual prostheses to preferentially activate functionally-distinct retinal ganglion cells (RGCs) is important for improving visual perception. This study investigates the use of high frequency stimulation (HFS) to elicit RGC activation, using a closed-loop algorithm to search for optimal stimulation parameters for preferential ON and OFF RGC activation, resembling natural physiological neural encoding in response to visual stimuli. We evaluated the performance of a wide range of electrical stimulation amplitudes and frequencies on RGC responses in vitro using murine retinal preparations. It was possible to preferentially excite either ON or OFF RGCs by adjusting amplitudes and frequencies in HFS. ON RGCs can be preferentially activated at relatively higher stimulation amplitudes (>150 μA) and frequencies (2–6.25 kHz) while OFF RGCs are activated by lower stimulation amplitudes (40–90 μA) across all tested frequencies (1–6.25 kHz). These stimuli also showed great promise in eliciting RGC responses that parallel natural RGC encoding: ON RGCs exhibited an increase in spiking activity during electrical stimulation while OFF RGCs exhibited decreased spiking activity, given the same stimulation amplitude. In conjunction with the in vitro studies, in silico simulations indicated that optimal HFS parameters could be rapidly identified in practice, whilst sampling spiking activity of relevant neuronal subtypes. This closed-loop approach represents a step forward in modulating stimulation parameters to achieve appropriate neural encoding in retinal prostheses, advancing control over RGC subtypes activated by electrical stimulation. PMID:29615857
Alpha-amylase from the Hyperthermophilic Archaeon Thermococcus thioreducens
NASA Technical Reports Server (NTRS)
Bernhardsdotter, E. C. M. J.; Pusey, M. L.; Ng, M. L.; Garriott, O. K.
2003-01-01
Extremophiles are microorganisms that thrive in, from an anthropocentric view, extreme environments such as hot springs. Their ability to survive extreme conditions makes enzymes from extremophiles of interest for industrial applications. One approach to producing these extremozymes entails expressing the enzyme-encoding gene in a mesophilic host such as E. coli. This method has been employed in the effort to produce an alpha-amylase from a hyperthermophile (an organism that displays optimal growth above 80 C) isolated from a hydrothermal vent at the Rainbow vent site in the Atlantic Ocean. Alpha-amylases catalyze the hydrolysis of starch to produce smaller sugars and constitute a class of industrial enzymes accounting for approximately 25% of the enzyme market. One application for thermostable alpha-amylases is the starch liquefaction process, in which starch is converted into fructose and glucose syrups. The alpha-amylase-encoding gene from the hyperthermophile Thermococcus thioreducens was cloned and sequenced, revealing high similarity with other archaeal hyperthermophilic alpha-amylases. The gene encoding the mature protein was expressed in E. coli. Initial characterization of this enzyme has revealed optimal amylolytic activity at 85-90 C and around pH 5.3-6.0.
Multidimensionally encoded magnetic resonance imaging.
Lin, Fa-Hsuan
2013-07-01
Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
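The change of cost function can be stated compactly. Following the IB literature, with past input X, future Y, compressed representation T, and trade-off parameter beta, the two objectives read:

```latex
% Information bottleneck: minimize over stochastic encoders q(t|x)
\min_{q(t|x)} \; I(X;T) - \beta\, I(T;Y)

% Deterministic information bottleneck: the rate term I(X;T) is
% replaced by the representation cost H(T), whose minimizer is a
% deterministic encoder t = f(x)
\min_{q(t|x)} \; H(T) - \beta\, I(T;Y)
```

Since I(X;T) = H(T) - H(T|X), dropping the conditional-entropy term removes the incentive for encoder stochasticity, which is what converts the optimal encoder from soft to hard assignments.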
Quantum-locked key distribution at nearly the classical capacity rate.
Lupo, Cosmo; Lloyd, Seth
2014-10-17
Quantum data locking is a protocol that allows for a small secret key to (un)lock an exponentially larger amount of information, hence yielding the strongest violation of the classical one-time pad encryption in the quantum setting. This violation mirrors a large gap existing between two security criteria for quantum cryptography quantified by two entropic quantities: the Holevo information and the accessible information. We show that the latter becomes a sensible security criterion if an upper bound on the coherence time of the eavesdropper's quantum memory is known. Under this condition, we introduce a protocol for secret key generation through a memoryless qudit channel. For channels with enough symmetry, such as the d-dimensional erasure and depolarizing channels, this protocol allows secret key generation at an asymptotic rate as high as the classical capacity minus one bit.
Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process
NASA Astrophysics Data System (ADS)
Golosovsky, Michael; Solomon, Sorin
2012-08-01
We put under experimental scrutiny the preferential attachment model that is commonly accepted as the generating mechanism of scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of individual papers follows superlinear preferential attachment, with exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain, since there is a substantial correlation between the present and recent citation rates of a paper. Based on these findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model, and achieve an excellent agreement with the measured citation distributions.
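A minimal sketch of the superlinear attachment rule: each new citation lands on paper i with probability proportional to (k_i + 1)^alpha (the additive offset, population sizes, and seed are illustrative assumptions, not the paper's calibrated model):

```python
# Simulate superlinear preferential attachment: each new citation lands
# on paper i with probability proportional to (k_i + 1)**ALPHA.
import numpy as np

rng = np.random.default_rng(42)
ALPHA = 1.28                      # exponent in the range reported above
N_PAPERS, N_CITATIONS = 1000, 20000

k = np.zeros(N_PAPERS)
for _ in range(N_CITATIONS):
    w = (k + 1.0) ** ALPHA        # attachment weights; +1 lets uncited papers compete
    p = w / w.sum()
    k[rng.choice(N_PAPERS, p=p)] += 1

print("max citations:", int(k.max()), "| uncited papers:", int((k == 0).sum()))
```

With alpha > 1 the citation counts concentrate far more heavily on a few papers than under the linear (alpha = 1) rule, which is the qualitative signature the measurements above detect.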
Augmented twin-nonlinear two-box behavioral models for multicarrier LTE power amplifiers.
Hammi, Oualid
2014-01-01
A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models achieve the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients.
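A minimal sketch of the two-box idea: a memoryless polynomial cascaded with a memory polynomial, each identified by complex least squares; the model orders, synthetic data, and fitting procedure are illustrative assumptions rather than the paper's identification setup:

```python
# Two-box behavioral model sketch: static (memoryless) polynomial
# followed by a memory polynomial, both fitted by complex least squares.
import numpy as np

rng = np.random.default_rng(0)
N, K_STATIC, K_MEM, M = 2000, 5, 3, 4        # samples, orders, memory depth
x = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Synthetic "measured" PA output with mild compression and memory.
y = x - 0.1 * np.abs(x) ** 2 * x + 0.05 * np.roll(x, 1)

# Box 1: memoryless polynomial u = sum_k a_k |x|^(k-1) x.
B1 = np.stack([np.abs(x) ** (k - 1) * x for k in range(1, K_STATIC + 1)], axis=1)
a, *_ = np.linalg.lstsq(B1, y, rcond=None)
u = B1 @ a

# Box 2: memory polynomial over the intermediate signal u.
cols = [np.roll(np.abs(u) ** (k - 1) * u, m)
        for m in range(M) for k in range(1, K_MEM + 1)]
B2 = np.stack(cols, axis=1)
b, *_ = np.linalg.lstsq(B2, y, rcond=None)

err = y - B2 @ b
nmse_db = 10 * np.log10(np.mean(np.abs(err) ** 2) / np.mean(np.abs(y) ** 2))
print(f"NMSE = {nmse_db:.1f} dB")
```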
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model that captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice of model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is applied to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
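A minimal sketch of posterior sampling for a transition matrix estimated from a discrete time series; for brevity each row is drawn from an independent Dirichlet posterior, ignoring the reversibility and sparsity constraints that the paper's Gibbs sampler enforces:

```python
# Posterior sampling of a Markov transition matrix from an observed
# discrete time series (independent Dirichlet rows; no reversibility
# constraint, unlike the Gibbs sampler described above).
import numpy as np

rng = np.random.default_rng(7)
traj = rng.choice(3, size=500)               # stand-in observed time series, 3 states

# Count matrix C[i, j] = number of observed i -> j transitions.
C = np.zeros((3, 3))
for s, t in zip(traj[:-1], traj[1:]):
    C[s, t] += 1

# Sample transition matrices row-wise from Dirichlet(C[i] + 1) posteriors
# and summarize the uncertainty of an observable (here: P[0, 1]).
samples = np.array([
    [rng.dirichlet(C[i] + 1.0) for i in range(3)] for _ in range(1000)
])
p01 = samples[:, 0, 1]
print(f"P(0->1): mean {p01.mean():.3f}, 95% CI "
      f"[{np.quantile(p01, 0.025):.3f}, {np.quantile(p01, 0.975):.3f}]")
```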
Time Domain Stability Margin Assessment Method
NASA Technical Reports Server (NTRS)
Clements, Keith
2017-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
Analysis and Synthesis of Memory-Based Fuzzy Sliding Mode Controllers.
Zhang, Jinhui; Lin, Yujuan; Feng, Gang
2015-12-01
This paper addresses the sliding mode control problem for a class of Takagi-Sugeno fuzzy systems with matched uncertainties. In contrast to the conventional memoryless sliding surface, a memory-based sliding surface is proposed that incorporates not only the current state but also the delayed state. Both robust and adaptive fuzzy sliding mode controllers are designed based on the proposed memory-based sliding surface. It is shown that the sliding surface can be reached and that the closed-loop control system is asymptotically stable. Furthermore, to reduce chattering, continuous sliding mode controllers are also presented. Finally, the ball and beam system is used to illustrate the advantages and effectiveness of the proposed approaches. With the proposed control approaches, not only can stability be guaranteed, but the transient performance can also be improved significantly.
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan; Liao, Zhongfa; Xu, Feihu; Qi, Bing; Qian, Li; Lo, Hoi-Kwong
2014-05-01
We demonstrate the first implementation of polarization encoding measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks. Active phase randomization of each individual pulse is implemented to protect against attacks on imperfect sources. By optimizing the parameters in the decoy state protocol, we show that it is feasible to implement polarization encoding MDI-QKD with commercial off-the-shelf devices. A rigorous finite key analysis is applied to estimate the secure key rate. Our work paves the way for the realization of a MDI-QKD network, in which the users only need compact and low-cost state-preparation devices and can share complicated and expensive detectors provided by an untrusted network server.
NASA Astrophysics Data System (ADS)
Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching
2000-10-01
We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.
Kaulfuß, Meike; Wensing, Ina; Windmann, Sonja; Hrycak, Camilla Patrizia; Bayer, Wibke
2017-02-06
In the Friend retrovirus mouse model we developed potent adenovirus-based vaccines designed to induce either strong Friend virus GagL85-93-specific CD8+ T cell or antibody responses, respectively. To optimize the immunization outcome we evaluated vaccination strategies using combinations of these vaccines. While the vaccines on their own confer strong protection from a subsequent Friend virus challenge, simply combining the vaccines into an optimized immunization protocol did not further improve vaccine effectiveness. We demonstrate that co-immunization with GagL85-93/leader-gag encoding vectors together with envelope-encoding vectors abrogates the induction of GagL85-93-specific CD8+ T cells, and in successive immunization protocols the immunization with the GagL85-93/leader-gag encoding vector had to precede the immunization with an envelope-encoding vector for efficient induction of GagL85-93-specific CD8+ T cells. Importantly, the antibody response to envelope was in fact enhanced when the mice were adenovirus-experienced from a prior immunization, highlighting the expedience of this approach. To circumvent the immunosuppressive effect of envelope on immune responses to simultaneously or subsequently administered immunogens, we developed a vaccination protocol based on two immunizations that induces strong immune responses and confers robust protection of highly Friend virus-susceptible mice from a lethal Friend virus challenge.
Bandwidth reduction for video-on-demand broadcasting using secondary content insertion
NASA Astrophysics Data System (ADS)
Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy
2005-01-01
An optimal broadcasting scheme in the presence of secondary content (i.e., advertisements) is proposed. The proposed scheme works for movies encoded in either a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves on the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.
Nieznański, Marek
2014-10-01
According to many theoretical accounts, reinstating the study context at the time of test creates optimal circumstances for item retrieval. The role of context reinstatement was tested in reference to context memory in several experiments. In the encoding phase, participants were presented with words printed in two different font colors (intrinsic context) or on two different sides of the computer screen (extrinsic context). At test, the context was reinstated or changed, and participants were asked to recognize words and recollect their study context. Moreover, a read-generate manipulation was introduced at encoding and retrieval, intended to influence the relative salience of item and context information. The results showed that context reinstatement had no effect on memory for extrinsic context but affected memory for intrinsic context when the item was generated at encoding and read at test. These results support the hypothesis that context information is reconstructed at retrieval only when the context was poorly encoded at study. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
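A minimal sketch of the source-encoding idea on a generic linear forward model: each iteration draws a fresh random encoding vector, collapses all sources into one synthetic "super-source", and takes a stochastic gradient step; the linear operators stand in for the acoustic wave solves, which is where the real computational savings arise:

```python
# Source-encoding stochastic gradient sketch: replace the sum of
# per-source data-misfit objectives with one randomly encoded objective
# per iteration, so each step costs a single "super-source" simulation.
import numpy as np

rng = np.random.default_rng(3)
n_src, n_rec, n_pix = 32, 64, 100

A = [rng.normal(size=(n_rec, n_pix)) for _ in range(n_src)]  # per-source forward operators
c_true = rng.normal(size=n_pix)                              # ground-truth "sound speed"
d = [Ai @ c_true for Ai in A]                                # per-source measured data

c = np.zeros(n_pix)                                          # model estimate
step = 5e-5
for _ in range(600):
    w = rng.choice([-1.0, 1.0], size=n_src)                  # fresh Rademacher encoding
    A_enc = sum(wi * Ai for wi, Ai in zip(w, A))             # encoded super-source
    d_enc = sum(wi * di for wi, di in zip(w, d))             # encoded super-measurement
    r = A_enc @ c - d_enc                                    # encoded residual
    c -= step * (A_enc.T @ r)                                # stochastic gradient step

print("relative model error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
```

Because the cross terms between sources average to zero over the random encodings, the expected gradient equals the full multi-source gradient at a fraction of the cost per iteration.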
Kaufman, Howard L; Bines, Steven D
2010-06-01
There are few effective treatment options available for patients with advanced melanoma. An oncolytic herpes simplex virus type 1 encoding granulocyte macrophage colony-stimulating factor (GM-CSF; Oncovex(GM-CSF)) for direct injection into accessible melanoma lesions resulted in a 28% objective response rate in a Phase II clinical trial. Responding patients demonstrated regression of both injected and noninjected lesions highlighting the dual mechanism of action of Oncovex(GM-CSF) that includes both a direct oncolytic effect in injected tumors and a secondary immune-mediated anti-tumor effect on noninjected tumors. Based on these preliminary results a prospective, randomized Phase III clinical trial in patients with unresectable Stage IIIb or c and Stage IV melanoma has been initiated. The rationale, study design, end points and future development of the Oncovex(GM-CSF) Pivotal Trial in Melanoma (OPTIM) trial are discussed in this article.
Ghost artifact cancellation using phased array processing.
Kellman, P; McVeigh, E R
2001-08-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
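A minimal sketch of the constrained combiner: weights that maximize SNR subject to unit gain at the true pixel and a null at the ghost location have the standard linearly constrained form w = R^-1 S (S^H R^-1 S)^-1 f; the coil sensitivities and noise covariance below are synthetic stand-ins:

```python
# Constrained-optimization array combining: null the ghost while passing
# the true signal, maximizing SNR under a noise covariance R.
import numpy as np

rng = np.random.default_rng(5)
n_coils = 8

s_true = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)   # sensitivities at pixel
s_ghost = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)  # sensitivities at ghost
S = np.stack([s_true, s_ghost], axis=1)   # n_coils x 2 constraint matrix
R = np.eye(n_coils)                       # noise covariance (white, for simplicity)
f = np.array([1.0, 0.0])                  # unit gain on signal, null on ghost

Rinv_S = np.linalg.solve(R, S)
w = Rinv_S @ np.linalg.solve(S.conj().T @ Rinv_S, f)   # combining weights

print("signal gain:", np.abs(w.conj() @ s_true))   # ~1
print("ghost gain :", np.abs(w.conj() @ s_ghost))  # ~0
```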
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of lossless quantum data compression, we present an operational interpretation for the family of quantum Rényi entropies. To do so, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. In the standard situation, where one intends to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization of long codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes to the source description, playing a role analogous to that of the von Neumann entropy.
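For orientation, the classical analogue of this statement is Campbell's source coding theorem; with penalization parameter t > 0, the exponential average length and the Rényi bound take the following form (the quantum result replaces the distribution with a density operator and the entropies with their quantum counterparts):

```latex
% Exponentially penalized (Campbell) average codeword length, t > 0:
L_t \;=\; \frac{1}{t}\,\log_2 \sum_x p(x)\, 2^{t\,\ell(x)}

% For codes satisfying the Kraft-McMillan inequality, the optimum obeys
% L_t \ge H_\alpha(X) with Rényi order \alpha = 1/(1+t), where
H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\log_2 \sum_x p(x)^{\alpha}
```

As t tends to 0, alpha tends to 1 and the bound recovers the ordinary Shannon (here, von Neumann) entropy of the standard average-length setting.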
Encinas, Lourdes; O'Keefe, Heather; Neu, Margarete; Remuiñán, Modesto J; Patel, Amish M; Guardia, Ana; Davie, Christopher P; Pérez-Macías, Natalia; Yang, Hongfang; Convery, Maire A; Messer, Jeff A; Pérez-Herrán, Esther; Centrella, Paolo A; Alvarez-Gómez, Daniel; Clark, Matthew A; Huss, Sophie; O'Donovan, Gary K; Ortega-Muro, Fátima; McDowell, William; Castañeda, Pablo; Arico-Muendel, Christopher C; Pajk, Stane; Rullás, Joaquín; Angulo-Barturen, Iñigo; Alvarez-Ruíz, Emilio; Mendoza-Losana, Alfonso; Ballell Pages, Lluís; Castro-Pichel, Julia; Evindar, Ghotas
2014-02-27
Tuberculosis (TB) is one of the world's oldest and deadliest diseases, killing a person every 20 s. InhA, the enoyl-ACP reductase from Mycobacterium tuberculosis, is the target of the frontline antitubercular drug isoniazid (INH). Compounds that directly target InhA and do not require activation by mycobacterial catalase peroxidase KatG are promising candidates for treating infections caused by INH resistant strains. The application of the encoded library technology (ELT) to the discovery of direct InhA inhibitors yielded compound 7 endowed with good enzymatic potency but with low antitubercular potency. This work reports the hit identification, the selected strategy for potency optimization, the structure-activity relationships of a hundred analogues synthesized, and the results of the in vivo efficacy studies performed with the lead compound 65.
Solving Open Job-Shop Scheduling Problems by SAT Encoding
NASA Astrophysics Data System (ADS)
Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo
This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds 678 of ABZ9 and 884 of YN1 are indeed optimal. We also improved the upper bound of YN2 and lower bounds of ABZ8, YN2, YN3 and YN4.
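A minimal sketch of the flavor of such translations: order-encoding Booleans x[i][t] meaning "operation i has started by time t", with monotonicity clauses and a machine-conflict clause, emitted as DIMACS CNF (the two-operation, unit-duration instance is illustrative; Crawford and Baker's full encoding is not reproduced here):

```python
# Emit a tiny DIMACS CNF: variable x[i][t] <=> "operation i has started
# by time t". Clauses: monotonicity of "started by", a deadline, and a
# conflict forbidding two operations on one machine starting together.
HORIZON = 4
OPS = 2

def var(i, t):                     # map (operation, time) to a DIMACS variable id
    return i * HORIZON + t + 1

clauses = []
for i in range(OPS):
    clauses.append([var(i, HORIZON - 1)])            # must start by the horizon
    for t in range(HORIZON - 1):
        clauses.append([-var(i, t), var(i, t + 1)])  # "started by t" is monotone in t
for t in range(HORIZON):
    # "op i starts exactly at t" is x[i][t] AND NOT x[i][t-1]; forbid both
    # operations starting at the same t (they share one machine).
    lits = [-var(0, t), -var(1, t)]
    if t > 0:
        lits += [var(0, t - 1), var(1, t - 1)]
    clauses.append(lits)

print(f"p cnf {OPS * HORIZON} {len(clauses)}")
for c in clauses:
    print(" ".join(map(str, c)) + " 0")
```

Proving a bound optimal then amounts to showing the CNF with horizon equal to the bound is satisfiable while the CNF with horizon one less is unsatisfiable.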
High-level expression of a synthetic gene encoding a sweet protein, monellin, in Escherichia coli.
Chen, Zhongjun; Cai, Heng; Lu, Fuping; Du, Lianxiang
2005-11-01
The expression in E. coli of a synthetic gene encoding monellin, a sweet protein, under the control of the phage T7 promoter is described. The single-chain monellin gene was designed based on the biased codons of E. coli so as to optimize its expression. Monellin was produced and accounted for 45% of total soluble protein. It was purified to yield 43 mg protein per g dry cell weight. The purity of the recombinant protein was confirmed by SDS-PAGE.
Optimal entangling operations between deterministic blocks of qubits encoded into single photons
NASA Astrophysics Data System (ADS)
Smith, Jake A.; Kaplan, Lev
2018-01-01
Here, we numerically simulate probabilistic elementary entangling operations between rail-encoded photons for the purpose of scalable universal quantum computation or communication. We propose grouping logical qubits into single-photon blocks wherein single-qubit rotations and the controlled-not (cnot) gate are fully deterministic and simple to implement. Interblock communication is then allowed through said probabilistic entangling operations. We find a promising trend in the increasing probability of successful interblock communication as we increase the number of optical modes operated on by our elementary entangling operations.
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.
Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang
2014-07-31
In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed for contactless absolute angle measurement. There are various error factors influencing the measuring accuracy, which are difficult to locate after the encoder has been assembled. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the amount of computation. The simulation and experimental results show that this diagnosis method can quantify the causes of the error and significantly reduce the number of iterations.
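As a rough illustration of how such a diagnosis loop can be driven, here is a generic PSO sketch that fits a toy periodic error model to a measured angle-error curve. The model terms (`offset`, `amp_mismatch`, `phase_err`) and all parameter values are hypothetical, and the paper's modified PSO variant is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def angle_error_model(theta, params):
    """Hypothetical error model: an offset, an amplitude-mismatch term, and
    a phase-error term produce a periodic residual over the rotation angle."""
    offset, amp_mismatch, phase_err = params
    return offset + amp_mismatch * np.sin(2 * theta) + phase_err * np.cos(theta)

def pso_fit(theta, measured_err, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5, bounds=(-0.1, 0.1)):
    """Fit the model parameters by minimizing the mean squared residual
    between modeled and measured angle error."""
    dim = 3
    pos = rng.uniform(*bounds, (n_particles, dim))
    vel = np.zeros_like(pos)
    cost = lambda p: np.mean((angle_error_model(theta, p) - measured_err) ** 2)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()
```

The fitted parameter vector then points to which physical error factor dominates, without disassembling the encoder.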
Efficiency turns the table on neural encoding, decoding and noise.
Deneve, Sophie; Chalk, Matthew
2016-04-01
Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have in many cases shown reductions as high as 90% in both storage space and inter-frame delay.
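A toy sketch of the two compression steps, assuming a quantized, power-of-two, cubic volume; macro-voxel fusion here uses exact equality rather than a similarity threshold, and the rendering path is omitted.

```python
import numpy as np
from itertools import product

def build_octree(vol):
    """Recursively fuse uniform regions of a quantized, cubic, power-of-two
    volume; a subtree whose voxels all share one value collapses to a
    single macro-voxel value."""
    if (vol == vol.flat[0]).all():
        return int(vol.flat[0])
    h = vol.shape[0] // 2
    return tuple(build_octree(vol[x:x + h, y:y + h, z:z + h])
                 for x, y, z in product((0, h), repeat=3))

def diff_encode(tree_prev, tree_curr):
    """Temporal difference encoding: subtrees identical to the previous
    time step are replaced by a 'SAME' marker and need not be re-stored."""
    if tree_prev == tree_curr:
        return "SAME"
    if isinstance(tree_curr, tuple) and isinstance(tree_prev, tuple):
        return tuple(diff_encode(p, c) for p, c in zip(tree_prev, tree_curr))
    return tree_curr

# quantize two consecutive time steps to 4 bits, then encode the second
# time step relative to the first
t0 = (np.random.default_rng(1).random((16, 16, 16)) * 16).astype(int)
t1 = t0.copy()
t1[:8] = (t1[:8] + 1) % 16          # only half the volume changes
delta = diff_encode(build_octree(t0), build_octree(t1))
```

Selective rendering then walks only the subtrees that are not marked "SAME", which is where the reported savings in inter-frame delay come from.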
Brain computer interface to enhance episodic memory in human participants
Burke, John F.; Merkow, Maxwell B.; Jacobs, Joshua; Kahana, Michael J.
2015-01-01
Recent research has revealed that neural oscillations in the theta (4–8 Hz) and alpha (9–14 Hz) bands are predictive of future success in memory encoding. Because these signals occur before the presentation of an upcoming stimulus, they are considered stimulus-independent in that they correlate with enhanced memory encoding independent of the item being encoded. Thus, such stimulus-independent activity has important implications for the neural mechanisms underlying episodic memory as well as the development of cognitive neural prosthetics. Here, we developed a brain computer interface (BCI) to test the ability of such pre-stimulus activity to modulate subsequent memory encoding. We recorded intracranial electroencephalography (iEEG) in neurosurgical patients as they performed a free recall memory task, and detected iEEG theta and alpha oscillations that correlated with optimal memory encoding. We then used these detected oscillatory changes to trigger the presentation of items in the free recall task. We found that item presentation contingent upon the presence of pre-stimulus theta and alpha oscillations modulated memory performance in more sessions than expected by chance. Our results suggest that an electrophysiological signal may be causally linked to a specific behavioral condition, and contingent stimulus presentation has the potential to modulate human memory encoding. PMID:25653605
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty less than 0.5 dB, in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
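The closed-form character of such an allocation is easy to see in a simplified setting. The sketch below assumes an exponential rate-distortion model D_i = a_i · 2^(-2 R_i) instead of the paper's rho-domain model; under that assumption, zero distortion variance is obtained by equalizing the per-sequence distortions, which has a closed form (negative rates would have to be clipped in practice).

```python
import numpy as np

def equal_distortion_allocation(a, R_total):
    """Allocate bits across N sequences so that every sequence ends up with
    the same distortion, driving the distortion variance to zero under the
    assumed model D_i = a_i * 2**(-2 * R_i).

    a: per-sequence R-D model parameters (e.g. signal variances).
    Returns per-sequence rates summing to R_total.
    """
    a = np.asarray(a, dtype=float)
    n = len(a)
    geo_mean = np.exp(np.log(a).mean())          # geometric mean of the a_i
    return R_total / n + 0.5 * np.log2(a / geo_mean)

rates = equal_distortion_allocation(a=[4.0, 1.0, 0.25], R_total=9.0)
# the distortions a_i * 2**(-2 * rates_i) come out identical for all
# three sequences, and the rates sum exactly to R_total
```

Complex sequences (large a_i) receive rates above the average and simple ones below it, which matches the intuition behind constant-quality multiplexing.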
Adaptation to changes in higher-order stimulus statistics in the salamander retina.
Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen
2014-01-01
Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to the variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture well the retinal encoding properties across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-gaussian aspects of the light intensity distribution.
The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.
Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Friston, Karl
2015-10-01
Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner take all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
Minguet-Parramona, Carla; Wang, Yizhou; Hills, Adrian; Vialet-Chabrand, Silvere; Griffiths, Howard; Rogers, Simon; Lawson, Tracy; Lew, Virgilio L; Blatt, Michael R
2016-01-01
Oscillations in cytosolic-free Ca(2+) concentration ([Ca(2+)]i) have been proposed to encode information that controls stomatal closure. [Ca(2+)]i oscillations with a period near 10 min were previously shown to be optimal for stomatal closure in Arabidopsis (Arabidopsis thaliana), but the studies offered no insight into their origins or mechanisms of encoding to validate a role in signaling. We have used a proven systems modeling platform to investigate these [Ca(2+)]i oscillations and analyze their origins in guard cell homeostasis and membrane transport. The model faithfully reproduced differences in stomatal closure as a function of oscillation frequency with an optimum period near 10 min under standard conditions. Analysis showed that this optimum was one of a range of frequencies that accelerated closure, each arising from a balance of transport and the prevailing ion gradients across the plasma membrane and tonoplast. These interactions emerge from the experimentally derived kinetics encoded in the model for each of the relevant transporters, without the need of any additional signaling component. The resulting frequencies are of sufficient duration to permit substantial changes in [Ca(2+)]i and, with the accompanying oscillations in voltage, drive the K(+) and anion efflux for stomatal closure. Thus, the frequency optima arise from emergent interactions of transport across the membrane system of the guard cell. Rather than encoding information for ion flux, these oscillations are a by-product of the transport activities that determine stomatal aperture. © 2016 American Society of Plant Biologists. All Rights Reserved.
On a biologically inspired topology optimization method
NASA Astrophysics Data System (ADS)
Kobayashi, Marcelo H.
2010-03-01
This work concerns the development of a biologically inspired methodology for the study of topology optimization in engineering and natural systems. The methodology is based on L systems and their turtle interpretation for the genotype-phenotype modeling of the topology development. The topology is analyzed using the finite element method and optimized using an evolutionary algorithm with a genetic encoding of the L system and its turtle interpretation, as well as body shape and physical characteristics. The test cases considered in this work clearly show the suitability of the proposed method for the study of engineering and natural complex systems.
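A minimal sketch of the genotype-to-phenotype mapping described here, with an illustrative grammar; the finite element analysis and the evolutionary loop themselves are omitted.

```python
import math

def expand(axiom, rules, depth):
    """Rewrite an L-system string: the genotype is (axiom, rules, depth)."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def turtle_segments(s, step=1.0, angle=25.0):
    """Turtle interpretation of the expanded string: the phenotype is a set
    of line segments (here, candidate structural members)."""
    x, y, heading = 0.0, 0.0, 90.0
    stack, segments = [], []
    for ch in s:
        if ch == "F":                       # move forward, drawing a member
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "+":
            heading += angle
        elif ch == "-":
            heading -= angle
        elif ch == "[":
            stack.append((x, y, heading))
        elif ch == "]":
            x, y, heading = stack.pop()
    return segments

# one genotype an evolutionary algorithm might mutate and select on
genotype = ("F", {"F": "F[+F]F[-F]F"}, 3)
phenotype = turtle_segments(expand(*genotype))
# a fitness evaluation would mesh these members and run a finite
# element analysis (omitted here)
```

Mutating the rewrite rules, the branching angle, or the recursion depth changes the grown topology, which is what the evolutionary search exploits.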
Non-Markovian Complexity in the Quantum-to-Classical Transition
Xiong, Heng-Na; Lo, Ping-Yuan; Zhang, Wei-Min; Feng, Da Hsuan; Nori, Franco
2015-01-01
The quantum-to-classical transition is due to environment-induced decoherence, and it depicts how classical dynamics emerges from quantum systems. Previously, the quantum-to-classical transition has mainly been described with memory-less (Markovian) quantum processes. Here we study the complexity of the quantum-to-classical transition through general non-Markovian memory processes. That is, the influence of various reservoirs results in a given initial quantum state evolving into one of the following four scenarios: thermal state, thermal-like state, quantum steady state, or oscillating quantum nonstationary state. In the latter two scenarios, the system maintains partial or full quantum coherence due to the strong non-Markovian memory effect, so that in these cases, the quantum-to-classical transition never occurs. This unexpected new feature provides a new avenue for the development of future quantum technologies because the remaining quantum oscillations in steady states are decoherence-free. PMID:26303002
Intonation in unaccompanied singing: accuracy, drift, and a model of reference pitch memory.
Mauch, Matthias; Frieler, Klaus; Dixon, Simon
2014-07-01
This paper presents a study on intonation and intonation drift in unaccompanied singing, and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under three conditions (Normal, Masked, Imagined). Over the duration of a recording, ∼50 s, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience, or the presence of conditions tested. Furthermore, it is shown that neither a static intonation memory model nor a memoryless interval-based intonation model can account for the accuracy and drift behavior observed. The proposed causal model provides a better explanation as it treats the reference pitch as a changing latent variable.
Memory effects in nanoparticle dynamics and transport
NASA Astrophysics Data System (ADS)
Sanghi, Tarun; Bhadauria, Ravi; Aluru, N. R.
2016-10-01
In this work, we use the generalized Langevin equation (GLE) to characterize and understand memory effects in nanoparticle dynamics and transport. Using the GLE formulation, we compute the memory function and investigate its scaling with the mass, shape, and size of the nanoparticle. It is observed that changing the mass of the nanoparticle leads to a rescaling of the memory function with the reduced mass of the system. Further, we show that for different mass nanoparticles it is the initial value of the memory function and not its relaxation time that determines the "memory" or "memoryless" dynamics. The size and the shape of the nanoparticle are found to influence both the functional-form and the initial value of the memory function. For a fixed mass nanoparticle, increasing its size enhances the memory effects. Using GLE simulations we also investigate and highlight the role of memory in nanoparticle dynamics and transport.
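For readers unfamiliar with the GLE, the sketch below simulates a particle with an exponential memory kernel K(t) = (γ/τ)·exp(−t/τ) via the standard Markovian embedding; all parameter values are illustrative, and this is not the molecular-dynamics setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def gle_trajectory(m=1.0, gamma=1.0, tau=0.5, kT=1.0, dt=1e-3, n=50_000):
    """Simulate a GLE with exponential memory kernel
    K(t) = (gamma/tau) * exp(-t/tau) via Markovian embedding: the memory
    integral and the matching colored noise (fluctuation-dissipation
    theorem) are carried by one auxiliary Ornstein-Uhlenbeck force f, so
    the pair (v, f) is Markovian even though v alone is not."""
    v = np.empty(n)
    v[0], f = 0.0, 0.0
    sigma = np.sqrt(2.0 * kT * gamma * dt) / tau
    for i in range(1, n):
        f += (-f / tau - (gamma / tau) * v[i - 1]) * dt \
             + sigma * rng.standard_normal()
        v[i] = v[i - 1] + (f / m) * dt
    return v

v = gle_trajectory()
# the velocity autocorrelation of v decays non-exponentially, the
# signature of memory; tau -> 0 at fixed gamma recovers ordinary,
# memoryless Langevin dynamics
```

Note that the kernel's initial value here is K(0) = γ/τ, the kind of quantity the authors identify as deciding whether the dynamics looks "memoryless" or not.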
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memoryless stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
Memoryless self-reinforcing directionality in endosomal active transport within living cells
NASA Astrophysics Data System (ADS)
Chen, Kejia; Wang, Bo; Granick, Steve
2015-06-01
In contrast to Brownian transport, the active motility of microbes, cells, animals and even humans often follows another random process known as a truncated Lévy walk. These stochastic motions are characterized by clustered small steps and intermittent longer jumps that often extend towards the size of the entire system. As there are repeated suggestions, albeit with some disagreement, that Lévy walks have functional advantages over Brownian motion in random searching and transport kinetics, their intentional engineering into active materials could be useful. Here, we show experimentally, in the classic active-matter system of intracellular trafficking, that Brownian-like steps self-organize into truncated Lévy walks through an apparent time-independent positive feedback such that directional persistence increases with the distance travelled persistently. A molecular model that allows the maximum output of the active propelling forces to fluctuate slowly fits the experiments quantitatively. Our findings offer design principles for programming efficient transport in active materials.
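The self-reinforcing mechanism can be captured in a few lines: make the probability of persisting grow with the length of the current persistent run. The feedback law and parameters below are illustrative, not the paper's fitted molecular model.

```python
import numpy as np

rng = np.random.default_rng(3)

def persistent_walk(n_steps=100_000, p0=0.5, k=0.05, p_max=0.99):
    """1D random walk whose directional persistence is self-reinforcing:
    the probability of repeating the last direction grows with the length
    of the current persistent run (a time-independent positive feedback).
    The feedback depends only on the current run length, not on elapsed
    time or any longer history."""
    x, direction, run = 0.0, 1, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        p_keep = min(p_max, p0 + k * run)   # persistence grows with the run
        if rng.random() < p_keep:
            run += 1.0
        else:
            direction, run = -direction, 0.0
        x += direction
        xs[i] = x
    return xs

steps = np.diff(persistent_walk())
# run lengths develop a broad, truncated-Levy-like distribution even
# though each step consults no explicit history beyond the run length
```

The truncation emerges naturally from the cap p_max, playing the role of the slowly fluctuating maximum propelling force in the authors' model.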
Renormalization group theory for percolation in time-varying networks.
Karschau, Jens; Zimmerling, Marco; Friedrich, Benjamin M
2018-05-22
Motivated by multi-hop communication in unreliable wireless networks, we present a percolation theory for time-varying networks. We develop a renormalization group theory for a prototypical network on a regular grid, where individual links switch stochastically between active and inactive states. The question whether a given source node can communicate with a destination node along paths of active links is equivalent to a percolation problem. Our theory maps the temporal existence of multi-hop paths on an effective two-state Markov process. We show analytically how this Markov process converges towards a memoryless Bernoulli process as the hop distance between source and destination node increases. Our work extends classical percolation theory to the dynamic case and elucidates temporal correlations of message losses. Quantification of temporal correlations has implications for the design of wireless communication and control protocols, e.g. in cyber-physical systems such as self-organized swarms of drones or smart traffic networks.
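The convergence to a memoryless process can be checked numerically with a toy model: each link is an independent two-state Markov chain, and a path is available only when all of its links are active. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def path_process(n_links, p_up=0.8, q_switch=0.5, n_steps=200_000):
    """Each link is an independent two-state Markov chain: at every step
    it is resampled (up with probability p_up) with probability q_switch,
    otherwise it keeps its state. A multi-hop path exists only when all
    links on it are simultaneously active."""
    state = rng.random(n_links) < p_up
    path = np.empty(n_steps, dtype=bool)
    for t in range(n_steps):
        flip = rng.random(n_links) < q_switch
        resample = rng.random(n_links) < p_up
        state = np.where(flip, resample, state)
        path[t] = state.all()
    return path

def lag1_autocorr(x):
    x = x.astype(float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# the lag-1 autocorrelation of path availability shrinks toward zero as
# the hop distance grows, i.e. the path process approaches a memoryless
# Bernoulli process
for hops in (1, 4, 16):
    print(hops, round(lag1_autocorr(path_process(hops)), 3))
```

This is the qualitative behavior the renormalization group treatment makes exact: the effective two-state Markov process for the path loses its temporal correlation with increasing hop distance.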
Memory and modularity in cell-fate decision making
NASA Astrophysics Data System (ADS)
Norman, Thomas M.; Lord, Nathan D.; Paulsson, Johan; Losick, Richard
2013-11-01
Genetically identical cells sharing an environment can display markedly different phenotypes. It is often unclear how much of this variation derives from chance, external signals, or attempts by individual cells to exert autonomous phenotypic programs. By observing thousands of cells for hundreds of consecutive generations under constant conditions, we dissect the stochastic decision between a solitary, motile state and a chained, sessile state in Bacillus subtilis. We show that the motile state is `memoryless', exhibiting no autonomous control over the time spent in the state. In contrast, the time spent as connected chains of cells is tightly controlled, enforcing coordination among related cells in the multicellular state. We show that the three-protein regulatory circuit governing the decision is modular, as initiation and maintenance of chaining are genetically separable functions. As stimulation of the same initiating pathway triggers biofilm formation, we argue that autonomous timing allows a trial commitment to multicellularity that external signals could extend.
Communication: Memory effects and active Brownian diffusion
NASA Astrophysics Data System (ADS)
Ghosh, Pulak K.; Li, Yunyun; Marchegiani, Giampiero; Marchesoni, Fabio
2015-12-01
A self-propelled artificial microswimmer is often modeled as a ballistic Brownian particle moving with constant speed aligned along one of its axis, but changing direction due to random collisions with the environment. Similarly to thermal noise, its angular randomization is described as a memoryless stochastic process. Here, we speculate that finite-time correlations in the orientational dynamics can affect the swimmer's diffusivity. To this purpose, we propose and solve two alternative models. In the first one, we simply assume that the environmental fluctuations governing the swimmer's propulsion are exponentially correlated in time, whereas in the second one, we account for possible damped fluctuations of the propulsion velocity around the swimmer's axis. The corresponding swimmer's diffusion constants are predicted to get, respectively, enhanced or suppressed upon increasing the model memory time. Possible consequences of this effect on the interpretation of the experimental data are discussed.
Augmented Twin-Nonlinear Two-Box Behavioral Models for Multicarrier LTE Power Amplifiers
2014-01-01
A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models achieve the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients. PMID:24624047
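A generic sketch of the two-box structure, a static polynomial cascaded into a memory polynomial with lagged cross terms, follows; the exact basis functions and all coefficient values of the proposed augmented models may differ, so everything below is illustrative.

```python
import numpy as np

def memoryless_nonlinearity(x, coeffs):
    """Highly nonlinear static box: odd-order polynomial acting on the
    complex baseband envelope (coefficient values are illustrative)."""
    return sum(c * x * np.abs(x) ** (2 * k) for k, c in enumerate(coeffs))

def memory_polynomial_cross(x, K, M, a, b):
    """Mildly nonlinear memory polynomial with cross terms. a[k][m] are the
    diagonal coefficients; b[k][m] weight one plausible cross-term form,
    the current sample times a delayed envelope (an assumption, not
    necessarily the paper's exact basis)."""
    n = len(x)
    y = np.zeros(n, dtype=complex)
    for k in range(K):
        for m in range(M + 1):
            xd = np.concatenate([np.zeros(m, dtype=complex), x[:n - m]])
            y += a[k][m] * xd * np.abs(xd) ** (2 * k)       # diagonal terms
            if m > 0:
                y += b[k][m] * x * np.abs(xd) ** (2 * k)    # cross terms
    return y

def twin_nonlinear_two_box(x, static_coeffs, K, M, a, b):
    """Cascade: static nonlinearity first, memory polynomial second."""
    return memory_polynomial_cross(memoryless_nonlinearity(x, static_coeffs),
                                   K, M, a, b)

rng = np.random.default_rng(5)
x = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
a = [[1.0, 0.1, 0.05], [-0.2, 0.02, 0.01], [0.05, -0.01, 0.005]]
b = [[0, 0.03, 0.01], [0, -0.005, 0.002], [0, 0.001, -0.001]]
y = twin_nonlinear_two_box(x, static_coeffs=[1.0, -0.15, 0.02],
                           K=3, M=2, a=a, b=b)
```

Splitting the strong static nonlinearity from the mild dynamic one is what keeps the coefficient count low relative to a single high-order memory polynomial.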
Efficiency or speculation? A time-varying analysis of European sovereign debt
NASA Astrophysics Data System (ADS)
Ferreira, Paulo
2018-01-01
The outbreak of the Greek debt crisis caused turmoil in European markets and drew attention to the problem of public debt and its consequences. The increase in the return rates of sovereign debts was one of these consequences. However, like any other asset, sovereign debt returns are expected to have a memoryless behaviour. Analysing a total of 15 European countries (Eurozone and non-Eurozone), and applying a time-varying analysis of the Hurst exponent, we found evidence of long-range memory in sovereign bonds. When analysing the spreads between each bond and the German one, it is possible to conclude that Eurozone countries' spreads show more evidence of long-range dependence. Considering the Eurozone countries most affected by the Eurozone crisis, that long-range dependence is more evident, but started before the crisis, which could be interpreted as possible speculation by investors.
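For concreteness, a common way to run such a time-varying analysis is to re-estimate the Hurst exponent on a sliding window, for example with the classic rescaled-range statistic; the paper's exact estimator and window settings are not reproduced here.

```python
import numpy as np

def hurst_rs(returns):
    """Rescaled-range (R/S) estimate of the Hurst exponent: H = 0.5 is the
    memoryless benchmark, H > 0.5 indicates long-range dependence."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    sizes = [s for s in (16, 32, 64, 128, 256) if s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs = []
        for start in range(0, n - s + 1, s):
            w = returns[start:start + s]
            z = np.cumsum(w - w.mean())      # cumulative deviation profile
            r, sd = z.max() - z.min(), w.std(ddof=1)
            if sd > 0:
                rs.append(r / sd)
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_s, log_rs, 1)[0]   # slope of log(R/S) vs log(s)

def rolling_hurst(returns, window=512, step=21):
    """Time-varying analysis: re-estimate H on a sliding window."""
    return [hurst_rs(returns[i:i + window])
            for i in range(0, len(returns) - window + 1, step)]
```

Plotting the rolling estimates against time then reveals whether departures from H = 0.5 concentrate around the crisis period or, as reported here, begin before it.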
NASA Astrophysics Data System (ADS)
Ebadi, H.; Saeedian, M.; Ausloos, M.; Jafari, G. R.
2016-11-01
The Boolean network is one successful model for investigating discrete complex systems such as gene interaction phenomena. The dynamics of a Boolean network, controlled with Boolean functions, is usually considered to be a Markovian (memoryless) process. However, both the self-organizing features of biological phenomena and their intelligent nature should raise some doubt about ignoring the history of their time evolution. Here, we extend the Markovian Boolean network approach to involve the effect of memory on the dynamics. This can be explored by modifying Boolean functions into non-Markovian functions, for example by making the threshold function, one of the most widely applied Boolean functions, non-Markovian. By applying the non-Markovian threshold function to the dynamical process of the yeast cell cycle network, we discover a power-law-like memory with a more robust dynamics than the Markovian dynamics.
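A toy version of such a modification, in which the threshold update integrates the whole state history with a discounting profile, is sketched below; the discount law here is geometric for brevity, whereas the paper's analysis concerns a power-law-like memory, and the network is random rather than the yeast cell cycle wiring.

```python
import numpy as np

rng = np.random.default_rng(6)

def step_with_memory(history, W, decay=0.8):
    """Non-Markovian threshold update: each node fires on the sign of a
    weighted input sum accumulated over the entire state history, with
    older states discounted by `decay` (illustrative parameters)."""
    T = len(history)
    weights = decay ** np.arange(T)[::-1]       # older states count less
    field = sum(wt * (W @ s) for wt, s in zip(weights, history))
    return (field > 0).astype(int)

n = 11
W = rng.choice([-1, 0, 1], size=(n, n))         # random interaction matrix
history = [rng.integers(0, 2, n)]               # random initial state
for _ in range(50):
    history.append(step_with_memory(history, W))
# with decay -> 0 only the most recent state matters and the usual
# Markovian threshold dynamics is recovered
```

Tuning the discount profile thus interpolates continuously between memoryless and strongly history-dependent network dynamics.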
NASA Technical Reports Server (NTRS)
Clements, Keith; Wall, John
2017-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
Functional expansion representations of artificial neural networks
NASA Technical Reports Server (NTRS)
Gray, W. Steven
1992-01-01
In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight into architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., those realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.
NASA Astrophysics Data System (ADS)
Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing
2015-03-01
It is widely believed that Shor's factoring algorithm provides a driving force to boost quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for the implementation of Shor's algorithm and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor's algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.
Phased array ghost elimination.
Kellman, Peter; McVeigh, Elliot R
2006-05-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steudle, Gesine A.; Knauer, Sebastian; Herzog, Ulrike
2011-05-15
We present an experimental implementation of optimum measurements for quantum state discrimination. Optimum maximum-confidence discrimination and optimum unambiguous discrimination of two mixed single-photon polarization states were performed. For the latter the states of rank 2 in a four-dimensional Hilbert space are prepared using both path and polarization encoding. Linear optics and single photons from a true single-photon source based on a semiconductor quantum dot are utilized.
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, so that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
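The core trick is inexpensive to state in code. The sketch below shows one stochastic-gradient iteration with Rademacher (±1) encoding weights, assuming hypothetical black-box routines `forward` and `adjoint_grad` for the wave simulation and its adjoint; neither is implemented here.

```python
import numpy as np

rng = np.random.default_rng(7)

def wise_step(c, sources, data, forward, adjoint_grad, lr=1e-3):
    """One stochastic-gradient step of waveform inversion with source
    encoding: all sources fire simultaneously with random +/-1 weights, so
    each iteration costs one simulation instead of one per source.

    c            : current speed-of-sound estimate (image array)
    sources, data: per-transducer source wavelets and measured waveforms
    forward      : assumed black-box wave solver, forward(c, src) -> waveform
    adjoint_grad : assumed adjoint routine returning d(residual^2)/dc
    """
    w = rng.choice([-1.0, 1.0], size=len(sources))        # encoding vector
    enc_src = sum(wi * s for wi, s in zip(w, sources))    # superposed source
    enc_data = sum(wi * d for wi, d in zip(w, data))      # matching data
    residual = forward(c, enc_src) - enc_data
    return c - lr * adjoint_grad(c, enc_src, residual)    # SGD update
```

Because the encoding weights are redrawn at every iteration, the cross-talk terms between different sources vanish in expectation, which is what lets a single simulated experiment stand in for the full per-source sweep.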
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing
NASA Astrophysics Data System (ADS)
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C.; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the GPU. We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, on which thread block allocation and memory access are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU to relieve the GPU of an unnecessary computation burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, which have likewise leveraged the advantages of GPU platforms and yielded significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
A new phase encoding approach for a compact head-up display
NASA Astrophysics Data System (ADS)
Suszek, Jaroslaw; Makowski, Michal; Sypek, Maciej; Siemion, Andrzej; Kolodziejczyk, Andrzej; Bartosz, Andrzej
2008-12-01
The possibility of encoding multiple asymmetric symbols into a single thin binary Fourier hologram would have a practical application in the design of simple translucent holographic head-up displays. A Fourier hologram displays the encoded images at infinity, which enables observation without time-consuming eye accommodation. Presenting a set of the most crucial signs for a driver in this way is desirable, especially for older people with various eyesight disabilities. In this paper a method of holographic design is presented that combines spatial segmentation with carrier frequencies. It makes it possible to obtain multiple reconstructed images selectable by the angle of the incident laser beam. In order to encode several binary symbols into a single Fourier hologram, a chessboard-shaped segmentation function is used. An optimized sequence of phase encoding steps and a final direct phase binarization enable recording of asymmetric symbols into a binary hologram. The theoretical analysis is presented, verified numerically, and confirmed in an optical experiment. We suggest and describe a practical and highly useful application of such holograms in an inexpensive HUD device for the automotive industry. We present two alternative propositions of car viewing setups.
Plant, Ewan P; Rakauskaite, Rasa; Taylor, Deborah R; Dinman, Jonathan D
2010-05-01
In retroviruses and the double-stranded RNA totiviruses, the efficiency of programmed -1 ribosomal frameshifting is critical for ensuring the proper ratios of upstream-encoded capsid proteins to downstream-encoded replicase enzymes. The genomic organizations of many other frameshifting viruses, including the coronaviruses, are very different, in that their upstream open reading frames encode nonstructural proteins, the frameshift-dependent downstream open reading frames encode enzymes involved in transcription and replication, and their structural proteins are encoded by subgenomic mRNAs. The biological significance of frameshifting efficiency and how the relative ratios of proteins encoded by the upstream and downstream open reading frames affect virus propagation has not been explored before. Here, three different strategies were employed to test the hypothesis that the -1 PRF signals of coronaviruses have evolved to produce the correct ratios of upstream- to downstream-encoded proteins. Specifically, infectious clones of the severe acute respiratory syndrome (SARS)-associated coronavirus harboring mutations that lower frameshift efficiency decreased infectivity by >4 orders of magnitude. Second, a series of frameshift-promoting mRNA pseudoknot mutants was employed to demonstrate that the frameshift signals of the SARS-associated coronavirus and mouse hepatitis virus have evolved to promote optimal frameshift efficiencies. Finally, we show that a previously described frameshift attenuator element does not actually affect frameshifting per se but rather serves to limit the fraction of ribosomes available for frameshifting. The findings of these analyses all support a "golden mean" model in which viruses use both programmed ribosomal frameshifting and translational attenuation to control the relative ratios of their encoded proteins.
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2016-01-01
Communication systems are described that use geometrically shaped PSK constellations that have increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding and the location of points within the geometrically shaped constellation changes as the code rate changes.
Loss resilience for two-qubit state transmission using distributed phase sensitive amplification
Dailey, James; Agarwal, Anjali; Toliver, Paul; ...
2015-11-12
We transmit phase-encoded non-orthogonal quantum states through a 5-km long fibre-based distributed optical phase-sensitive amplifier (OPSA) using telecom-wavelength photonic qubit pairs. The gain is set to equal the transmission loss to probabilistically preserve input states during transmission. While neither state is optimally aligned to the OPSA, each input state is equally amplified with no measurable degradation in state quality. These results promise a new approach to reduce the effects of loss by encoding quantum information in a two-qubit Hilbert space which is designed to benefit from transmission through an OPSA.
Improvement of encoding and retrieval in normal and pathological aging with word-picture paradigm.
Iodice, Rosario; Meilán, Juan José G; Carro, Juan
2015-01-01
During the aging process, there is a progressive deficit in the encoding of new information and its retrieval. Different strategies are used to maintain or optimize memory performance and to diminish these deficits in people with and without dementia. One of the classic techniques is paired-associate learning (PAL), which is based on improving the encoding of memories, but it has yet to be used to its full potential in people with dementia. In this study, our aim is to corroborate the importance of PAL tasks as instrumental tools for creating contextual cues, during both the encoding and retrieval phases of memory. Additionally, we aim to identify the most effective form of presenting the related items. Pairs of stimuli were shown to healthy elderly people and to patients with moderate and mild Alzheimer's disease. The encoding conditions were as follows: word/word, picture/picture, picture/word, and word/picture. Associative cued recall of the second item in the pair shows that retrieval is higher for the word/picture condition in the two groups of patients with dementia when compared to the other conditions, while word/word is the least effective in all cases. These results confirm that PAL is an effective tool for creating contextual cues during both the encoding and retrieval phases in people with dementia when the items are presented using the word/picture condition. In this way, the encoding and retrieval deficit can be reduced in these people.
Enhancing prospective memory in mild cognitive impairment: The role of enactment.
Pereira, Antonina; de Mendonça, Alexandre; Silva, Dina; Guerreiro, Manuela; Freeman, Jayne; Ellis, Judi
2015-01-01
Prospective memory (PM) is a fundamental requirement for independent living which might be prematurely compromised in the neurodegenerative process, namely in mild cognitive impairment (MCI), a typical prodromal Alzheimer's disease (AD) phase. Most encoding manipulations that typically enhance learning in healthy adults are of minimal benefit to AD patients. However, there is some indication that these can display a recall advantage when encoding is accompanied by the physical enactment of the material. The aim of this study was to explore the potential benefits of enactment at encoding and cue-action relatedness on memory for intentions in MCI patients and healthy controls using a behavioral PM experimental paradigm. We report findings examining the influence of enactment at encoding for PM performance in MCI patients and age- and education-matched controls using a laboratory-based PM task with a factorial independent design. PM performance was consistently superior when physical enactment was used at encoding and when target-action pairs were strongly associated. Importantly, these beneficial effects were cumulative and observable across both a healthy and a cognitively impaired lifespan as well as evident in the perceived subjective difficulty in performing the task. The identified beneficial effects of enacted encoding and semantic relatedness have unveiled the potential contribution of this encoding technique to optimize attentional demands through an adaptive allocation of strategic resources. We discuss our findings with respect to their potential impact on developing strategies to improve PM in AD sufferers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawes, M.C.
1995-03-01
The objective of this research was to develop a model system to study border cell separation in transgenic pea roots. In addition, the hypothesis that genes encoding pectolytic enzymes in the root cap play a role in the programmed separation of root border cells from the root tip was tested. The following objectives have been accomplished: (1) the use of transgenic hairy roots to study border cell separation has been optimized for Pisum sativum; (2) a cDNA encoding a root cap pectinmethylesterase (PME) has been cloned; (3) PME and polygalacturonase activities in cell walls of the root cap have been characterized and shown to be correlated with border cell separation. A fusion gene encoding pectate lyase has also been transformed into pea hairy root cells.
Deterministic and unambiguous dense coding
NASA Astrophysics Data System (ADS)
Wu, Shengjun; Cohen, Scott M.; Sun, Yuqing; Griffiths, Robert B.
2006-04-01
Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1−τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ⩽ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ⩽ D, assuming τ_x > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states.
Optimization of heterogeneous Bin packing using adaptive genetic algorithm
NASA Astrophysics Data System (ADS)
Sridhar, R.; Chandrasekaran, M.; Sriramya, C.; Page, Tom
2017-03-01
This research concentrates on bin packing using a hybrid genetic approach. The optimal and feasible packing of goods for transportation and distribution to various locations, subject to practical constraints, is the key point of this work. The number of boxes to be packed cannot be predicted in advance, the boxes are not always of the same category, and many practical constraints are involved; this is why optimal packing matters so much to industry. This work presents a hybrid heuristic Genetic Algorithm (HGA) for solving the Three-Dimensional (3D) single-container, arbitrary-sized, rectangular-prismatic bin packing optimization problem, considering most of the practical constraints faced in the logistics industry. This goal was achieved by minimizing the empty volume inside the container using a genetic approach. A feasible packing pattern was achieved by satisfying practical constraints such as box orientation, stack priority, container stability, weight limits, overlap avoidance, and shipment placement. The 3D bin packing problem consists of 'n' boxes to be packed into a container of standard dimensions so as to maximize volume utilization and, in turn, profit. Furthermore, the boxes to be packed may be of arbitrary sizes. The user inputs are the number of bins, their size, shape, weight, and any constraints, along with the standard container dimensions. These inputs were stored in a database and encoded into string (chromosome) format acceptable to the GA, whose operators then acted on these encoded strings to find the best solution.
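To make the encoding step concrete, the sketch below shows one plausible chromosome design for such a packer: a packing order (a permutation of box ids) paired with an orientation gene per box, and an order crossover that keeps the permutation valid. The data layout and operator are illustrative assumptions for exposition, not the paper's exact scheme.

    import random

    def random_chromosome(n_boxes):
        # A chromosome pairs a packing order (permutation of box ids) with
        # an orientation gene (one of 6 axis-aligned rotations) per box.
        order = random.sample(range(n_boxes), n_boxes)
        orientation = [random.randrange(6) for _ in range(n_boxes)]
        return order, orientation

    def order_crossover(parent_a, parent_b):
        # Order crossover (OX) keeps the permutation valid: copy a prefix of
        # parent A's packing order, then append parent B's remaining boxes in
        # B's order. Orientation genes follow the parent that supplied each box.
        order_a, ori_a = parent_a
        order_b, ori_b = parent_b
        cut = len(order_a) // 2
        head = order_a[:cut]
        order = head + [b for b in order_b if b not in head]
        ori = [ori_a[b] if b in head else ori_b[b] for b in range(len(order_a))]
        return order, ori

    child = order_crossover(random_chromosome(8), random_chromosome(8))
    print(child)  # e.g. ([3, 0, 6, ...], [orientations]) ready for decoding

A decoder would then place boxes in chromosome order, rejecting or penalizing placements that violate the stability, weight, and overlap constraints listed above.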
Genetically Engineered Cyanobacteria
NASA Technical Reports Server (NTRS)
Zhou, Ruanbao (Inventor); Gibbons, William (Inventor)
2015-01-01
The disclosed embodiments provide cyanobacteria spp. that have been genetically engineered to have increased production of carbon-based products of interest. These genetically engineered hosts efficiently convert carbon dioxide and light into carbon-based products of interest such as long-chain hydrocarbons. Several constructs containing polynucleotides encoding enzymes active in the metabolic pathways of cyanobacteria are disclosed. In many instances, the cyanobacteria strains have been further genetically modified to optimize production of the carbon-based products of interest. The optimization includes both up-regulation and down-regulation of particular genes.
Finley, Jason R.; Benjamin, Aaron S.
2012-01-01
Three experiments demonstrated learners’ abilities to adaptively and qualitatively accommodate their encoding strategies to the demands of an upcoming test. Stimuli were word pairs. In Experiment 1, test expectancy was induced for either cued recall (of targets given cues) or free recall (of targets only) across 4 study–test cycles of the same test format, followed by a final critical cycle featuring either the expected or the unexpected test format. For final tests of both cued and free recall, participants who had expected that test format outperformed those who had not. This disordinal interaction, supported by recognition and self-report data, demonstrated not mere differences in effort based on anticipated test difficulty, but rather qualitative and appropriate differences in encoding strategies based on expected task demands. Participants also came to appropriately modulate metacognitive monitoring (Experiment 2) and study-time allocation (Experiment 3) across study–test cycles. Item and associative recognition performance, as well as self-report data, revealed shifts in encoding strategies across trials; these results were used to characterize and evaluate the different strategies that participants employed for cued versus free recall and to assess the optimality of participants’ metacognitive control of encoding strategies. Taken together, these data illustrate a sophisticated form of metacognitive control, in which learners qualitatively shift encoding strategies to match the demands of anticipated tests. PMID:22103783
No reason to expect "reading universals".
Levy, Yonata
2012-10-01
Writing systems encode linguistic information in diverse ways, relying on cognitive procedures that are likely to be general purpose rather than specific to reading. Optimality in reading for meaning is achieved via the entire communicative act, involving, when the need arises, syntax, nonlinguistic context, and selective attention.
Methodology and Method and Apparatus for Signaling With Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2014-01-01
Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding and the location of points within the geometrically shaped constellation changes as the code rate changes.
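As context for the capacity measures named here, the joint capacity of an equiprobable constellation over AWGN can be estimated by Monte Carlo. The sketch below is our own illustration; the function name, sampling choices, and the 16-QAM example are not from the patent.

    import numpy as np

    def awgn_joint_capacity(points, snr_db, n_samples=20000, seed=0):
        # Monte Carlo estimate of the joint capacity (bits/symbol) of an
        # equiprobable 2-D constellation over a complex AWGN channel.
        rng = np.random.default_rng(seed)
        x = np.asarray(points, dtype=complex)
        m = len(x)
        es = np.mean(np.abs(x) ** 2)            # average symbol energy
        n0 = es / (10 ** (snr_db / 10))         # noise spectral density
        tx = rng.integers(m, size=n_samples)    # transmitted symbol indices
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_samples)
                                   + 1j * rng.standard_normal(n_samples))
        y = x[tx] + noise
        d2 = np.abs(y[:, None] - x[None, :]) ** 2
        # log-sum-exp of likelihood ratios against the sent symbol
        metric = -(d2 - d2[np.arange(n_samples), tx][:, None]) / n0
        return np.log2(m) - np.mean(np.log2(np.exp(metric).sum(axis=1)))

    # Conventional 16-QAM as a reference point; a geometrically shaped
    # constellation would simply supply different complex points here.
    qam16 = [complex(a, b) for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)]
    print(awgn_joint_capacity(qam16, snr_db=12.0))  # approaches 4 bits at high SNR

Optimizing the point locations to maximize such an estimate at a target SNR is, in spirit, what the described embodiments do for joint or parallel decoding capacity.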
Robust quantum optimizer with full connectivity.
Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P
2017-04-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
Network placement optimization for large-scale distributed system
NASA Astrophysics Data System (ADS)
Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng
2018-01-01
The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy, and overall cost. Network placement optimization is therefore an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified, and a placement optimization objective function is developed in terms of coverage capability, measurement accuracy, and overall cost. A novel grid-based encoding approach for the genetic algorithm is proposed, so that the network placement is optimized by a global rough search followed by a local detailed search. An obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
Zha, Jian; Li, Bing-Zhi; Shen, Ming-Hua; Hu, Meng-Long; Song, Hao; Yuan, Ying-Jin
2013-01-01
Production of ethanol and xylitol from lignocellulosic hydrolysates is an alternative to the traditional production of ethanol in utilizing biomass. However, the conversion efficiency of xylose to xylitol is restricted by glucose repression, causing a low xylitol titer. To this end, we cloned genes CDT-1 (encoding a cellodextrin transporter) and gh1-1 (encoding an intracellular β-glucosidase) from Neurospora crassa and XYL1 (encoding a xylose reductase that converts xylose into xylitol) from Scheffersomyces stipitis into Saccharomyces cerevisiae, enabling simultaneous production of ethanol and xylitol from a mixture of cellobiose and xylose (main components of lignocellulosic hydrolysates). We further optimized the expression levels of CDT-1 and XYL1 by manipulating their promoters and copy-numbers, and constructed an engineered S. cerevisiae strain (carrying one copy of PGK1p-CDT1 and two copies of TDH3p-XYL1), which showed an 85.7% increase in xylitol production from the mixture of cellobiose and xylose than that from the mixture of glucose and xylose. Thus, we achieved a balanced co-fermentation of cellobiose (0.165 g/L/h) and xylose (0.162 g/L/h) at similar rates to co-produce ethanol (0.36 g/g) and xylitol (1.00 g/g). PMID:23844185
B1 transmit phase gradient coil for single-axis TRASE RF encoding.
Deng, Qunli; King, Scott B; Volotovskyy, Vyacheslav; Tomanek, Boguslaw; Sharp, Jonathan C
2013-07-01
TRASE (Transmit Array Spatial Encoding) MRI uses RF transmit phase gradients instead of B0 field gradients for k-space traversal and high-resolution MR image formation. Transmit coil performance is a key determinant of TRASE image quality. The purpose of this work is to design an optimized RF transmit phase gradient array for spatial encoding in a transverse direction (x- or y- axis) for a 0.2T vertical B0 field MRI system, using a single transmitter channel. This requires the generation of two transmit B1 RF fields with uniform amplitude and positive and negative linear phase gradients respectively over the imaging volume. A two-element array consisting of a double Maxwell-type coil and a Helmholtz-type coil was designed using 3D field simulations. The phase gradient polarity is set by the relative phase of the RF signals driving the simultaneously energized elements. Field mapping and 1D TRASE imaging experiments confirmed that the constructed coil produced the fields and operated as designed. A substantially larger imaging volume relative to that obtainable from a non-optimized Maxwell-Helmholtz design was achieved. The Maxwell (sine)-Helmholtz (cosine) approach has proven successful for a horizontal phase gradient coil. A similar approach may be useful for other phase-gradient coil designs. Copyright © 2013 Elsevier Inc. All rights reserved.
Coding of time-dependent stimuli in homogeneous and heterogeneous neural populations.
Beiran, Manuel; Kruscha, Alexandra; Benda, Jan; Lindner, Benjamin
2018-04-01
We compare the information transmission of a time-dependent signal by two types of uncoupled neuron populations that differ in their sources of variability: (i) a homogeneous population whose units receive independent noise and (ii) a deterministic heterogeneous population, where each unit exhibits a different baseline firing rate ('disorder'). Our criterion for making both sources of variability quantitatively comparable is that the interspike-interval distributions are identical for both systems. Numerical simulations using leaky integrate-and-fire neurons reveal that a non-zero amount of noise or disorder maximizes the encoding efficiency of the homogeneous and heterogeneous system, respectively, as a particular case of suprathreshold stochastic resonance. Our findings thus illustrate that heterogeneity can be as beneficial for neuronal populations as dynamic noise. The optimal noise/disorder depends on the system size and the properties of the stimulus, such as its intensity or cutoff frequency. We find that weak stimuli are better encoded by a noiseless heterogeneous population, whereas for strong stimuli a homogeneous population outperforms an equivalent heterogeneous system up to a moderate noise level. Furthermore, we derive analytical expressions for the coherence function in the cases of very strong noise and of vanishing intrinsic noise or heterogeneity, which predict the existence of an optimal noise intensity. Our results show that, depending on the type of signal, noise as well as heterogeneity can enhance the encoding performance of neuronal populations.
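The trade-off described here can be reproduced qualitatively with a toy simulation. The sketch below uses the correlation between the stimulus and the smoothed population rate as a crude stand-in for the paper's coherence-based measures, and it does not enforce the matched interspike-interval criterion; all parameter values are illustrative assumptions.

    import numpy as np

    def lif_population_encoding(n_neurons=200, noise_sd=0.1, hetero_sd=0.0,
                                t_sim=20.0, dt=1e-3, seed=0):
        # Uncoupled LIF population driven by a common band-limited stimulus
        # plus either independent noise (homogeneous case) or static offsets
        # in the baseline drive ('disorder', heterogeneous case).
        rng = np.random.default_rng(seed)
        n_steps = int(t_sim / dt)
        kernel = np.ones(50) / 50.0
        stim = np.convolve(rng.standard_normal(n_steps), kernel, mode="same")
        offsets = hetero_sd * rng.standard_normal(n_neurons)
        v = np.zeros(n_neurons)
        rate = np.zeros(n_steps)
        tau, v_th, mu = 0.02, 1.0, 0.8  # time constant, threshold, mean drive
        for t in range(n_steps):
            noise = noise_sd * rng.standard_normal(n_neurons) / np.sqrt(dt)
            v += dt * (-v + mu + offsets + stim[t] + noise) / tau
            fired = v >= v_th
            v[fired] = 0.0              # reset after a spike
            rate[t] = fired.mean()
        smoothed = np.convolve(rate, kernel, mode="same")
        return np.corrcoef(stim, smoothed)[0, 1]

    # Noisy homogeneous population vs deterministic heterogeneous population;
    # sweeping noise_sd or hetero_sd typically reveals a peak at a non-zero
    # value, the suprathreshold-stochastic-resonance effect discussed above.
    print(lif_population_encoding(noise_sd=0.1, hetero_sd=0.0))
    print(lif_population_encoding(noise_sd=0.0, hetero_sd=0.3))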
Construction of optimal resources for concatenated quantum protocols
NASA Astrophysics Data System (ADS)
Pirker, A.; Wallnöfer, J.; Briegel, H. J.; Dür, W.
2017-06-01
We consider the explicit construction of resource states for measurement-based quantum information processing. We concentrate on special-purpose resource states that are capable of performing a certain operation or task, where we consider unitary Clifford circuits as well as non-trace-preserving completely positive maps, more specifically probabilistic operations including Clifford operations and Pauli measurements. We concentrate on 1 → m and m → 1 operations, i.e., operations that map one input qubit to m output qubits or vice versa. Examples of such operations include encoding and decoding in quantum error correction, entanglement purification, or entanglement swapping. We provide a general framework to construct optimal resource states for complex tasks that are combinations of these elementary building blocks. All resource states only contain input and output qubits, and are hence of minimal size. We obtain a stabilizer description of the resulting resource states, which we also translate into a circuit pattern to experimentally generate these states. In particular, we derive recurrence relations at the level of stabilizers as a key analytical tool to generate explicit (graph) descriptions of families of resource states. This allows us to explicitly construct resource states for encoding, decoding, and syndrome readout for concatenated quantum error correction codes, code switchers, multiple rounds of entanglement purification, quantum repeaters, and combinations thereof (such as resource states for entanglement purification of encoded states).
Perceptual support promotes strategy generation: Evidence from equation solving.
Alibali, Martha W; Crooks, Noelle M; McNeil, Nicole M
2017-08-30
Over time, children shift from using less optimal strategies for solving mathematics problems to using better ones. But why do children generate new strategies? We argue that they do so when they begin to encode problems more accurately; therefore, we hypothesized that perceptual support for correct encoding would foster strategy generation. Fourth-grade students solved mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __) in a pre-test. They were then randomly assigned to one of three perceptual support conditions or to a Control condition. Participants in all conditions completed three mathematical equivalence problems with feedback about correctness. Participants in the experimental conditions received perceptual support (i.e., highlighting in red ink) for accurately encoding the equal sign, the right side of the equation, or the numbers that could be added to obtain the correct solution. Following this intervention, participants completed a problem-solving post-test. Among participants who solved the problems incorrectly at pre-test, those who received perceptual support for correctly encoding the equal sign were more likely to generate new, correct strategies for solving the problems than were those who received feedback only. Thus, perceptual support for accurate encoding of a key problem feature promoted generation of new, correct strategies. Statement of Contribution: What is already known on this subject? With age and experience, children shift to using more effective strategies for solving math problems. Problem encoding also improves with age and experience. What the present study adds: Support for encoding the equal sign led children to generate correct strategies for solving equations. Improvements in problem encoding are one source of new strategies. © 2017 The British Psychological Society.
Plant, Ewan P.; Rakauskaitė, Rasa; Taylor, Deborah R.; Dinman, Jonathan D.
2010-01-01
In retroviruses and the double-stranded RNA totiviruses, the efficiency of programmed −1 ribosomal frameshifting is critical for ensuring the proper ratios of upstream-encoded capsid proteins to downstream-encoded replicase enzymes. The genomic organizations of many other frameshifting viruses, including the coronaviruses, are very different, in that their upstream open reading frames encode nonstructural proteins, the frameshift-dependent downstream open reading frames encode enzymes involved in transcription and replication, and their structural proteins are encoded by subgenomic mRNAs. The biological significance of frameshifting efficiency and how the relative ratios of proteins encoded by the upstream and downstream open reading frames affect virus propagation has not been explored before. Here, three different strategies were employed to test the hypothesis that the −1 PRF signals of coronaviruses have evolved to produce the correct ratios of upstream- to downstream-encoded proteins. Specifically, infectious clones of the severe acute respiratory syndrome (SARS)-associated coronavirus harboring mutations that lower frameshift efficiency decreased infectivity by >4 orders of magnitude. Second, a series of frameshift-promoting mRNA pseudoknot mutants was employed to demonstrate that the frameshift signals of the SARS-associated coronavirus and mouse hepatitis virus have evolved to promote optimal frameshift efficiencies. Finally, we show that a previously described frameshift attenuator element does not actually affect frameshifting per se but rather serves to limit the fraction of ribosomes available for frameshifting. The findings of these analyses all support a “golden mean” model in which viruses use both programmed ribosomal frameshifting and translational attenuation to control the relative ratios of their encoded proteins. PMID:20164235
The Curious Case of Orthographic Distinctiveness: Disruption of Categorical Processing
ERIC Educational Resources Information Center
McDaniel, Mark A.; Cahill, Michael J.; Bugg, Julie M.
2016-01-01
How does orthographic distinctiveness affect recall of structured (categorized) word lists? On one theory, enhanced item-specific information (e.g., more distinct encoding) in concert with robust relational information (e.g., categorical information) optimally supports free recall. This predicts that for categorically structured lists,…
Generative Representations for Computer-Automated Evolutionary Design
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2006-01-01
With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D objects.
Wavelet filtered shifted phase-encoded joint transform correlation for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new wavelet-filter-based shifted-phase-encoded joint transform correlation (WPJTC) technique has been proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal method has been proposed by considering the discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination compared to alternate pattern recognition techniques, such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale facial database and the extended Yale facial database under different environments, such as illumination variation, noise, and 3D changes in facial expressions. Test results show that the proposed WPJTC yields better performance compared to alternate JTC-based face recognition techniques.
Modeling Color Difference for Visualization Design.
Szafir, Danielle Albers
2018-01-01
Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based either on color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that people's abilities to perceive color differences vary significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.
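As an illustration of the kind of metric the paper argues for, the sketch below pairs a standard sRGB-to-CIELAB conversion with a hypothetical mark-size-dependent visibility rule. The 1/size scaling and the constant k are placeholders, not the fitted model parameters from the study.

    import numpy as np

    def srgb_to_lab(rgb):
        # Standard sRGB -> CIELAB conversion (D65 white point), rgb in [0, 1].
        rgb = np.asarray(rgb, dtype=float)
        lin = np.where(rgb <= 0.04045, rgb / 12.92,
                       ((rgb + 0.055) / 1.055) ** 2.4)
        m = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        xyz = (m @ lin) / np.array([0.95047, 1.0, 1.08883])
        f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                     xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        return np.array([116 * f[1] - 16,        # L*
                         500 * (f[0] - f[1]),    # a*
                         200 * (f[1] - f[2])])   # b*

    def noticeably_different(color_a, color_b, mark_size, k=6.0):
        # Hypothetical rule in the paper's spirit: the CIELAB distance needed
        # for a reliably visible difference grows as marks shrink.
        delta_e = np.linalg.norm(srgb_to_lab(color_a) - srgb_to_lab(color_b))
        return delta_e > k / max(mark_size, 1e-6)

    print(noticeably_different([0.8, 0.2, 0.2], [0.8, 0.3, 0.2], mark_size=0.5))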
A genetically-encoded chloride and pH sensor for dissociating ion dynamics in the nervous system
Raimondo, Joseph V.; Joyce, Bradley; Kay, Louise; Schlagheck, Theresa; Newey, Sarah E.; Srinivas, Shankar; Akerman, Colin J.
2013-01-01
Within the nervous system, intracellular Cl− and pH regulate fundamental processes including cell proliferation, metabolism, synaptic transmission, and network excitability. Cl− and pH are often co-regulated, and network activity results in the movement of both Cl− and H+. Tools to accurately measure these ions are crucial for understanding their role under physiological and pathological conditions. Although genetically-encoded Cl− and pH sensors have been described previously, these either lack ion specificity or are unsuitable for neuronal use. Here we present ClopHensorN—a new genetically-encoded ratiometric Cl− and pH sensor that is optimized for the nervous system. We demonstrate the ability of ClopHensorN to dissociate and simultaneously quantify Cl− and H+ concentrations under a variety of conditions. In addition, we establish the sensor's utility by characterizing activity-dependent ion dynamics in hippocampal neurons. PMID:24312004
Multi-year encoding of daily rainfall and streamflow via the fractal-multifractal method
NASA Astrophysics Data System (ADS)
Puente, C. E.; Maskey, M.; Sivakumar, B.
2017-12-01
A deterministic geometric approach, the fractal-multifractal (FM) method, which has proven faithful in encoding daily geophysical sets over a year, is used to describe records over multiple years at a time. Looking for FM parameter trends over longer periods, the present study shows FM descriptions of daily rainfall and streamflow gathered over five consecutive years, optimizing deviations on accumulated sets. The results for 100 and 60 five-year sets of rainfall and streamflow, respectively, near Sacramento, California, illustrate that: (a) encoding of both types of data sets may be accomplished with relatively small errors; and (b) predicting the geometry of both variables appears to be possible, even five years ahead, by training neural networks on the respective FM parameters. It is emphasized that the FM approach not only captures the accumulated sets over successive pentads but also preserves other statistical attributes, including the overall "texture" of the records.
Genetically Encoded Biosensors in Plants: Pathways to Discovery.
Walia, Ankit; Waadt, Rainer; Jones, Alexander M
2018-04-29
Genetically encoded biosensors that directly interact with a molecule of interest were first introduced more than 20 years ago with fusion proteins that served as fluorescent indicators for calcium ions. Since then, the technology has matured into a diverse array of biosensors that have been deployed to improve our spatiotemporal understanding of molecules whose dynamics have profound influence on plant physiology and development. In this review, we address several types of biosensors with a focus on genetically encoded calcium indicators, which are now the most diverse and advanced group of biosensors. We then consider the discoveries in plant biology made by using biosensors for calcium, pH, reactive oxygen species, redox conditions, primary metabolites, phytohormones, and nutrients. These discoveries were dependent on the engineering, characterization, and optimization required to develop a successful biosensor; they were also dependent on the methodological developments required to express, detect, and analyze the readout of such biosensors.
Overexpression and characterization of laccase from Trametes versicolor in Pichia pastoris.
Li, Q; Pei, J; Zhao, L; Xie, J; Cao, F; Wang, G
2014-01-01
A laccase-encoding gene of Trametes versicolor, lccA, was cloned and expressed in Pichia pastoris X33. The lccA gene consists of a 1560 bp open reading frame encoding 519 amino acids, which places it in the blue copper oxidase family. To improve the expression level of recombinant laccase in P. pastoris, the fermentation conditions were optimized by single-factor experiments. The optimal fermentation conditions for laccase production in shake-flask cultivation using BMGY medium were an initial pH of 7.0, the presence of 0.5 mM Cu²⁺, and 0.6% methanol added to the culture every 24 h. The laccase activity reached 11.972 U/L under optimal conditions after 16 days of induction in a medium with 4% peptone. After 100 h of large-scale production in a 5 L fermenter, the enzyme activity reached 18.123 U/L. The recombinant laccase was purified by ultrafiltration and (NH4)2SO4 precipitation, showing a single band on SDS-PAGE with a molecular mass of 58 kDa. The optimum pH and temperature for the laccase were pH 2.0 and 50 °C with 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) as a substrate. The recombinant laccase was stable over a pH range of 2.0-7.0. The K_m and V_max values of LccA were 0.43 mM and 82.3 U/mg for ABTS, respectively.
Source encoding in multi-parameter full waveform inversion
NASA Astrophysics Data System (ADS)
Matharu, Gian; Sacchi, Mauricio D.
2018-04-01
Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on mono-parameter acoustic inversion. We extend SEFWI to the multi-parameter case, with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging, as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions is conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
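A minimal sketch of the random source-encoding idea on which SEFWI builds (Rademacher ±1 codes; helper names are ours, and the paper's exact scheme may differ): all shots are stacked into one supershot, the same code is applied to the observed data, and one misfit evaluation then stands in for many. The construction presumes every receiver records every shot, which is exactly the fixed-spread assumption behind the time-windowing limitation noted above.

    import numpy as np

    def encode_sources(gathers, rng):
        # gathers: (n_shots, n_receivers, n_time). Random +/-1 weights combine
        # all shots into a single 'supershot'; cross-terms cancel on average.
        weights = rng.choice([-1.0, 1.0], size=gathers.shape[0])
        return weights, np.tensordot(weights, gathers, axes=1)

    def encoded_misfit(simulated_supershot, observed_gathers, weights):
        # L2 misfit against the identically encoded observations; redrawing
        # the code each iteration suppresses the residual cross-talk.
        observed_supershot = np.tensordot(weights, observed_gathers, axes=1)
        return 0.5 * np.sum((simulated_supershot - observed_supershot) ** 2)

    rng = np.random.default_rng(0)
    observed = rng.standard_normal((16, 32, 500))  # stand-in for field data
    weights, supershot = encode_sources(observed, rng)
    print(encoded_misfit(supershot, observed, weights))  # 0.0 for a perfect model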
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
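The generic filter-then-threshold pipeline the paper formalizes can be sketched as follows; the moving-average kernel and fixed margin below are naive stand-ins for the learned filter and threshold that the authors instead obtain by solving an eigenvalue problem over training data.

    import numpy as np

    def extrema_features(x, kernel=None, w=10, margin=0.05):
        # Filter the series, then keep points that are strict extrema over a
        # +/-w window and protrude from it by at least `margin` -- a crude
        # robustness criterion standing in for the optimized one.
        if kernel is None:
            kernel = np.ones(5) / 5.0           # naive smoothing filter
        y = np.convolve(x, kernel, mode="same")
        idx, kind = [], []
        for i in range(w, len(y) - w):
            left, right = y[i - w:i], y[i + 1:i + w + 1]
            if y[i] > left.max() and y[i] > right.max() and \
               y[i] - min(left.min(), right.min()) > margin:
                idx.append(i); kind.append(+1)  # robust local maximum
            elif y[i] < left.min() and y[i] < right.min() and \
                 max(left.max(), right.max()) - y[i] > margin:
                idx.append(i); kind.append(-1)  # robust local minimum
        return np.array(idx), np.array(kind)

    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 6 * np.pi, 400)) + 0.2 * rng.standard_normal(400)
    positions, kinds = extrema_features(series)
    print(positions, kinds)  # the extrema sequence is the comparison feature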
Kamensek, Urska; Tesic, Natasa; Sersa, Gregor; Kos, Spela; Cemazar, Maja
2017-01-01
Electrotransfer-mediated delivery of the interleukin-12 (IL-12) gene, encoded on a plasmid vector, has already been demonstrated to have potent antitumor efficacy and great potential for clinical application. In the present study, our aim was to construct an optimized IL-12-encoding plasmid that is safe from the regulatory point of view. In light of previous studies demonstrating that IL-12 should be released in a tumor-localized manner for optimal efficacy, the strong ubiquitous promoter was replaced with a weak endogenous promoter of the collagen 2 gene, which is specific for fibroblasts. Next, to comply with increasing regulatory demands for clinically used plasmids, the expression cassette was cloned in a plasmid lacking the antibiotic resistance gene. The constructed fibroblast-specific and antibiotic-free IL-12 plasmid was demonstrated to support low IL-12 expression after gene electrotransfer in selected cell lines. Furthermore, the removal of antibiotic resistance did not affect the plasmid expression profile and lowered its cytotoxicity. With optimal IL-12 expression and minimal transgene non-specific effects, i.e., low cytotoxicity, the constructed plasmid could be especially valuable for different modern immunological approaches to achieve localized boosting of the host's immune system. Copyright © 2016 Elsevier Inc. All rights reserved.
Graeber, Kai; Linkies, Ada; Steinbrecher, Tina; Mummenhoff, Klaus; Tarkowská, Danuše; Turečková, Veronika; Ignatz, Michael; Sperber, Katja; Voegele, Antje; de Jong, Hans; Urbanová, Terezie; Strnad, Miroslav; Leubner-Metzger, Gerhard
2014-08-26
Seed germination is an important life-cycle transition because it determines subsequent plant survival and reproductive success. To detect optimal spatiotemporal conditions for germination, seeds act as sophisticated environmental sensors integrating information such as ambient temperature. Here we show that the delay of germination 1 (DOG1) gene, known for providing dormancy adaptation to distinct environments, determines the optimal temperature for seed germination. By reciprocal gene-swapping experiments between Brassicaceae species we show that the DOG1-mediated dormancy mechanism is conserved. Biomechanical analyses show that this mechanism regulates the material properties of the endosperm, a seed tissue layer acting as germination barrier to control coat dormancy. We found that DOG1 inhibits the expression of gibberellin (GA)-regulated genes encoding cell-wall remodeling proteins in a temperature-dependent manner. Furthermore we demonstrate that DOG1 causes temperature-dependent alterations in the seed GA metabolism. These alterations in hormone metabolism are brought about by the temperature-dependent differential expression of genes encoding key enzymes of the GA biosynthetic pathway. These effects of DOG1 lead to a temperature-dependent control of endosperm weakening and determine the optimal temperature for germination. The conserved DOG1-mediated coat-dormancy mechanism provides a highly adaptable temperature-sensing mechanism to control the timing of germination.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) for solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP; this mapping is an important step, as it lets each particle in PSO represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective function is to minimize the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is to demonstrate to practitioners dealing with complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
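As a concrete illustration of one of the compared procedures, here is a sketch of random-keys decoding for the JSP (in Python rather than the authors' MATLAB): sorting the particle's continuous position and taking the sorted indices modulo the number of jobs yields an operation-based sequence in which each job appears once per machine.

    import numpy as np

    def random_keys_to_sequence(position, n_jobs, n_machines):
        # position: continuous PSO particle of length n_jobs * n_machines.
        # Sorting the keys gives a permutation; index mod n_jobs maps it to a
        # repetition-based job sequence that any JSP decoder can schedule.
        assert len(position) == n_jobs * n_machines
        order = np.argsort(position)    # ranks of the keys
        return order % n_jobs           # job id for each operation slot

    # A 3-job x 2-machine instance needs a 6-dimensional particle:
    rng = np.random.default_rng(1)
    print(random_keys_to_sequence(rng.random(6), n_jobs=3, n_machines=2))

Because any real-valued particle decodes to a valid sequence, standard PSO velocity updates need no repair step, which is the main appeal of this representation.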
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing.
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of a CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the graphics processing unit (GPU). We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, where the thread block allocation and the memory access mechanism are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU, sparing the GPU an extra and unnecessary computational burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, leveraging the advantages of GPU platforms harmoniously and yielding significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
Nocturnal Mnemonics: Sleep and Hippocampal Memory Processing
Saletin, Jared M.; Walker, Matthew P.
2012-01-01
As critical as waking brain function is to learning and memory, an established literature now describes an equally important yet complementary role for sleep in information processing. This overview examines the specific contribution of sleep to human hippocampal memory processing; both the detriments caused by a lack of sleep, and conversely, the proactive benefits that develop following the presence of sleep. First, a role for sleep before learning is discussed, preparing the hippocampus for initial memory encoding. Second, a role for sleep after learning is considered, modulating the post-encoding consolidation of hippocampal-dependent memory. Third, a model is outlined in which these encoding and consolidation operations are symbiotically accomplished, associated with specific NREM sleep physiological oscillations. As a result, the optimal network outcome is achieved: increasing hippocampal independence and hence overnight consolidation, while restoring next-day sparse hippocampal encoding capacity for renewed learning ability upon awakening. Finally, emerging evidence is considered suggesting that, unlike previous conceptions, sleep does not universally consolidate all information. Instead, and based on explicit as well as saliency cues during initial encoding, sleep executes the discriminatory offline consolidation only of select information. Consequently, sleep promotes the targeted strengthening of some memories while actively forgetting others; a proposal with significant theoretical and clinical ramifications. PMID:22557988
Engineering Genetically Encoded FRET Sensors
Lindenburg, Laurens; Merkx, Maarten
2014-01-01
Förster Resonance Energy Transfer (FRET) between two fluorescent proteins can be exploited to create fully genetically encoded and thus subcellularly targetable sensors. FRET sensors report changes in energy transfer between a donor and an acceptor fluorescent protein that occur when an attached sensor domain undergoes a change in conformation in response to ligand binding. The design of sensitive FRET sensors remains challenging as there are few generally applicable design rules and each sensor must be optimized anew. In this review we discuss various strategies that address this shortcoming, including rational design approaches that exploit self-associating fluorescent domains and the directed evolution of FRET sensors using high-throughput screening. PMID:24991940
Codon optimization underpins generalist parasitism in fungi
Badet, Thomas; Peyraud, Remi; Mbengue, Malick; Navaud, Olivier; Derbyshire, Mark; Oliver, Richard P; Barbacci, Adelin; Raffaele, Sylvain
2017-01-01
The range of hosts that parasites can infect is a key determinant of the emergence and spread of disease. Yet, the impact of host range variation on the evolution of parasite genomes remains unknown. Here, we show that codon optimization underlies genome adaptation in broad host range parasites. We found that the longer proteins encoded by broad host range fungi likely increase natural selection on codon optimization in these species. Accordingly, codon optimization correlates with host range across the fungal kingdom. At the species level, biased patterns of synonymous substitutions underpin increased codon optimization in a generalist but not a specialist fungal pathogen. Virulence genes were consistently enriched in highly codon-optimized genes of generalist but not specialist species. We conclude that codon optimization is related to the capacity of parasites to colonize multiple hosts. Our results link genome evolution and translational regulation to the long-term persistence of generalist parasitism. DOI: http://dx.doi.org/10.7554/eLife.22472.001 PMID:28157073
Trajectories for Locomotion Systems: A Geometric and Computational Approach via Series Expansions
2004-10-11
Helle, Michael; Koken, Peter; Van Cauteren, Marc; van Osch, Matthias J. P.
2017-01-01
Purpose: Both dynamic magnetic resonance angiography (4D-MRA) and perfusion imaging can be acquired by using arterial spin labeling (ASL). While 4D-MRA highlights large vessel pathology, such as stenosis or collateral blood flow patterns, perfusion imaging provides information on the microvascular status. Therefore, a complete picture of the cerebral hemodynamic condition could be obtained by combining the two techniques. Here, we propose a novel technique for simultaneous acquisition of 4D-MRA and perfusion imaging using time-encoded pseudo-continuous arterial spin labeling. Methods: The time-encoded pseudo-continuous arterial spin labeling module consisted of a first subbolus that was optimized for perfusion imaging by using a labeling duration of 1800 ms, whereas the other six subboli of 130 ms were used for encoding the passage of the labeled spins through the arterial system for 4D-MRA acquisition. After the entire labeling module, a multishot 3D turbo-field echo-planar-imaging readout was executed for the 4D-MRA acquisition, immediately followed by a single-shot, multislice echo-planar-imaging readout for perfusion imaging. The optimal excitation flip angle for the 3D turbo-field echo-planar-imaging readout was investigated by evaluating the image quality of the 4D-MRA and perfusion images as well as the accuracy of the estimated cerebral blood flow values. Results: When using 36 excitation radiofrequency pulses with flip angles of 5 or 7.5°, the saturation effects of the 3D turbo-field echo-planar-imaging readout on the perfusion images were relatively moderate and, after correction, there were no statistically significant differences between the obtained cerebral blood flow values and those from traditional time-encoded pseudo-continuous arterial spin labeling. Conclusions: This study demonstrated that simultaneous acquisition of 4D-MRA and perfusion images can be achieved by using time-encoded pseudo-continuous arterial spin labeling. Magn Reson Med 79:2676-2684, 2018. © 2017 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:28913838
Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm
NASA Astrophysics Data System (ADS)
Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi
2014-01-01
This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived based on a model of group hunting of animals such as lions, wolves, and dolphins when looking for a prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique as it can extract the accurate TSK fuzzy model with an appropriate number of rules.
Rekoske, Brian T; Smith, Heath A; Olson, Brian M; Maricque, Brett B; McNeel, Douglas G
2015-08-01
DNA vaccines have demonstrated antitumor efficacy in multiple preclinical models, but low immunogenicity has been observed in several human clinical trials. This has led to many approaches seeking to improve the immunogenicity of DNA vaccines. We previously reported that a DNA vaccine encoding the cancer-testis antigen SSX2, modified to encode altered epitopes with increased MHC class I affinity, elicited a greater frequency of cytolytic, multifunctional CD8(+) T cells in non-tumor-bearing mice. We sought to test whether this optimized vaccine resulted in increased antitumor activity in mice bearing an HLA-A2-expressing tumor engineered to express SSX2. We found that immunization of tumor-bearing mice with the optimized vaccine elicited a surprisingly inferior antitumor effect relative to the native vaccine. Both native and optimized vaccines led to increased expression of PD-L1 on tumor cells, but antigen-specific CD8(+) T cells from mice immunized with the optimized construct expressed higher PD-1. Splenocytes from immunized animals induced PD-L1 expression on tumor cells in vitro. Antitumor activity of the optimized vaccine could be increased when combined with antibodies blocking PD-1 or PD-L1, or by targeting a tumor line not expressing PD-L1. These findings suggest that vaccines aimed at eliciting effector CD8(+) T cells, and DNA vaccines in particular, might best be combined with PD-1 pathway inhibitors in clinical trials. This strategy may be particularly advantageous for vaccines targeting prostate cancer, a disease for which antitumor vaccines have demonstrated clinical benefit and yet PD-1 pathway inhibitors alone have shown little efficacy to date. ©2015 American Association for Cancer Research.
Optimal port-based teleportation
NASA Astrophysics Data System (ADS)
Mozrzymas, Marek; Studziński, Michał; Strelchuk, Sergii; Horodecki, Michał
2018-05-01
Deterministic port-based teleportation (dPBT) protocol is a scheme where a quantum state is guaranteed to be transferred to another system without unitary correction. We characterise the best achievable performance of the dPBT when both the resource state and the measurement is optimised. Surprisingly, the best possible fidelity for an arbitrary number of ports and dimension of the teleported state is given by the largest eigenvalue of a particular matrix—Teleportation Matrix. It encodes the relationship between a certain set of Young diagrams and emerges as the optimal solution to the relevant semidefinite programme.
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
DNA-Encoded Solid-Phase Synthesis: Encoding Language Design and Complex Oligomer Library Synthesis.
MacConnell, Andrew B; McEnaney, Patrick J; Cavett, Valerie J; Paegel, Brian M
2015-09-14
The promise of exploiting combinatorial synthesis for small molecule discovery remains unfulfilled due primarily to the "structure elucidation problem": the back-end mass spectrometric analysis that significantly restricts one-bead-one-compound (OBOC) library complexity. The very molecular features that confer binding potency and specificity, such as stereochemistry, regiochemistry, and scaffold rigidity, are conspicuously absent from most libraries because isomerism introduces mass redundancy and diverse scaffolds yield uninterpretable MS fragmentation. Here we present DNA-encoded solid-phase synthesis (DESPS), comprising parallel compound synthesis in organic solvent and aqueous enzymatic ligation of unprotected encoding dsDNA oligonucleotides. Computational encoding language design yielded 148 thermodynamically optimized sequences with Hamming string distance ≥ 3 and total read length <100 bases for facile sequencing. Ligation is efficient (70% yield), specific, and directional over 6 encoding positions. A series of isomers served as a testbed for DESPS's utility in split-and-pool diversification. Single-bead quantitative PCR detected 9 × 10⁴ molecules/bead and sequencing allowed for elucidation of each compound's synthetic history. We applied DESPS to the combinatorial synthesis of a 75,645-member OBOC library containing scaffold, stereochemical and regiochemical diversity using mixed-scale resin (160-μm quality control beads and 10-μm screening beads). Tandem DNA sequencing/MALDI-TOF MS analysis of 19 quality control beads showed excellent agreement (<1 ppt) between DNA sequence-predicted mass and the observed mass. DESPS synergistically unites the advantages of solid-phase synthesis and DNA encoding, enabling single-bead structural elucidation of complex compounds and synthesis using reactions normally considered incompatible with unprotected DNA. The widespread availability of inexpensive oligonucleotide synthesis, enzymes, DNA sequencing, and PCR makes implementation of DESPS straightforward, and may prompt the chemistry community to revisit the synthesis of more complex and diverse libraries.
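The encoding-language design step can be illustrated with a toy greedy construction that enforces the pairwise Hamming distance requirement; the actual design additionally applied thermodynamic optimization, which this sketch (with an arbitrary word length) omits.

    from itertools import product

    def hamming(a, b):
        # Number of mismatched positions between two equal-length words.
        return sum(x != y for x, y in zip(a, b))

    def pick_encoding_words(length=8, min_dist=3, limit=148):
        # Greedy scan: keep a candidate DNA word only if it differs from
        # every kept word in at least `min_dist` positions.
        kept = []
        for letters in product("ACGT", repeat=length):
            word = "".join(letters)
            if all(hamming(word, k) >= min_dist for k in kept):
                kept.append(word)
                if len(kept) == limit:
                    break
        return kept

    words = pick_encoding_words()
    print(len(words), words[:3])  # 148 mutually distant code words

A minimum distance of 3 means any single-base sequencing error still leaves the read closer to the intended word than to any other, which is what makes the decoding robust.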
An efficient procedure for the expression and purification of HIV-1 protease from inclusion bodies.
Nguyen, Hong-Loan Thi; Nguyen, Thuy Thi; Vu, Quy Thi; Le, Hang Thi; Pham, Yen; Trinh, Phuong Le; Bui, Thuan Phuong; Phan, Tuan-Nghia
2015-12-01
Several studies have focused on HIV-1 protease for developing drugs to treat AIDS. Recombinant HIV-1 protease is used to screen new drugs from synthetic compounds or natural substances. However, large-scale expression and purification of this enzyme is difficult, mainly because of its low expression and solubility. In this study, we constructed 9 recombinant plasmids containing a sequence encoding HIV-1 protease along with different fusion tags and examined the expression of the enzyme from these plasmids. Of the 9 plasmids, the pET32a(+) plasmid containing the HIV-1 protease-encoding sequence along with sequences encoding an autocleavage site GTVSFNF at the N-terminus and TEV plus 6× His tag at the C-terminus showed the highest expression of the enzyme and was selected for further analysis. The recombinant protein was isolated from inclusion bodies by using 2 tandem Q- and Ni-Sepharose columns. SDS-PAGE of the obtained HIV-1 protease produced a single band of approximately 13 kDa. The enzyme was recovered efficiently (4 mg protein/L of cell culture) and had a high specific activity of 1190 nmol min⁻¹ mg⁻¹ at an optimal pH of 4.7 and optimal temperature of 37 °C. This procedure for expressing and purifying HIV-1 protease is now being scaled up to produce the enzyme on a large scale for its application. Copyright © 2015 Elsevier Inc. All rights reserved.
Robust quantum optimizer with full connectivity
Nigg, Simon E.; Lörch, Niels; Tiwari, Rakesh P.
2017-01-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation. PMID:28435880
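As background for the number partitioning benchmark: with spins s_i = ±1 assigning each number a_i to one of two subsets, the cost H = (Σ_i a_i s_i)² expands into all-to-all Ising couplings J_ij = a_i a_j, which is why full connectivity matters. A brute-force sketch on a toy instance (not the paper's dissipative simulation):

```python
import itertools
import numpy as np

a = np.array([4, 7, 1, 3, 5])          # toy instance of numbers to partition
best_s, best_e = None, np.inf
for bits in itertools.product([-1, 1], repeat=len(a)):
    s = np.array(bits)
    e = np.dot(a, s) ** 2               # Ising energy (sum of a_i * s_i)^2
    if e < best_e:
        best_s, best_e = s, e
print("partition:", best_s, "squared imbalance:", best_e)
```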
USDA-ARS?s Scientific Manuscript database
Bacterial flagella production is controlled by a multi-tiered regulatory system that coordinates expression of 40-50 subunits and correct assembly of these complicated structures. Flagellar expression is environmentally controlled, presumably to optimize the benefits and liabilities of flagellar ex...
The Sodium-Activated Potassium Channel Slack Is Required for Optimal Cognitive Flexibility in Mice
ERIC Educational Resources Information Center
Bausch, Anne E.; Dieter, Rebekka; Nann, Yvette; Hausmann, Mario; Meyerdierks, Nora; Kaczmarek, Leonard K.; Ruth, Peter; Lukowski, Robert
2015-01-01
"Kcnt1" encoded sodium-activated potassium channels (Slack channels) are highly expressed throughout the brain where they modulate the firing patterns and general excitability of many types of neurons. Increasing evidence suggests that Slack channels may be important for higher brain functions such as cognition and normal intellectual…
Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian
2016-10-24
Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a single geometric-phase-based structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
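The coded scattering patterns described here follow from a standard array-factor picture: each meta-particle contributes a geometric phase of 0 or π according to its coding bit. A simplified one-dimensional sketch (assumed element count and spacing; not the authors' full-wave model):

```python
import numpy as np

N, d = 16, 0.5                        # 16 elements, half-wavelength spacing (assumed)
bits = np.tile([0, 0, 1, 1], N // 4)  # 1-bit coding super-cell "0011" -> beam splitting
phases = np.pi * bits                 # coding bit '0' -> 0 rad, '1' -> pi rad

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
k = 2 * np.pi                         # wavenumber, lengths in units of wavelength
n = np.arange(N)
af = np.abs(np.exp(1j * (k * d * np.outer(np.sin(theta), n) + phases)).sum(axis=1))
print("strongest scattering near", round(np.degrees(theta[af.argmax()]), 1), "degrees")
```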
Mild traumatic brain injury: graph-model characterization of brain networks for episodic memory.
Tsirka, Vasso; Simos, Panagiotis G; Vakis, Antonios; Kanatsouli, Kassiani; Vourkas, Michael; Erimaki, Sofia; Pachou, Ellie; Stam, Cornelis Jan; Micheloyannis, Sifis
2011-02-01
Episodic memory is among the cognitive functions that can be affected in the acute phase following mild traumatic brain injury (MTBI). The present study used EEG recordings to evaluate global synchronization and network organization of rhythmic activity during the encoding and recognition phases of an episodic memory task varying in stimulus type (kaleidoscope images, pictures, words, and pseudowords). Synchronization of oscillatory activity was assessed using linear and nonlinear connectivity estimators, and network analyses were performed using algorithms derived from graph theory. Twenty-five MTBI patients (tested within days post-injury) and healthy volunteers were closely matched on demographic variables, verbal ability, psychological status variables, as well as on overall task performance. Patients demonstrated sub-optimal network organization, as reflected by changes in graph parameters in the theta and alpha bands during both encoding and recognition. There were no group differences in spectral energy during task performance or in network parameters during a control condition (rest). Evidence of less optimally organized functional networks during memory tasks was more prominent for pictorial than for verbal stimuli.
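For readers unfamiliar with the graph parameters referred to here, the sketch below (generic, using networkx; not the authors' pipeline) thresholds a synchronization matrix into a binary graph and computes two of the standard measures, clustering coefficient and characteristic path length:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
sync = rng.uniform(0, 1, (32, 32))          # stand-in for an EEG synchronization matrix
sync = (sync + sync.T) / 2                  # symmetrize channel pairs
np.fill_diagonal(sync, 0)

G = nx.from_numpy_array((sync > 0.8).astype(int))  # keep strongly synchronized pairs
print("average clustering:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```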
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
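The layer structure can be sketched compactly (illustrative only: a toy global tone-mapping operator stands in for the paper's encoded non-uniform operators, and no video codec is invoked):

```python
import numpy as np

def tone_map(hdr):
    """Toy global tone-mapping operator (Reinhard-style), 8-bit output."""
    ldr = hdr / (1.0 + hdr)
    return np.round(ldr * 255).astype(np.uint8)

def inverse_tone_map(ldr):
    l = ldr.astype(np.float64) / 255.0
    return l / np.maximum(1.0 - l, 1e-6)

hdr = np.random.default_rng(1).gamma(2.0, 2.0, (4, 4))  # stand-in HDR frame
base = tone_map(hdr)                          # backwards-compatible 8-bit layer
residual = hdr - inverse_tone_map(base)       # enhancement-layer payload
print("max reconstruction error without enhancement:", np.abs(residual).max())
```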
Quality versus intelligibility: studying human preferences for American Sign Language video
NASA Astrophysics Data System (ADS)
Ciaramello, Frank M.; Hemami, Sheila S.
2011-03-01
Real-time videoconferencing using cellular devices provides natural communication to the Deaf community. For this application, compressed American Sign Language (ASL) video must be evaluated in terms of the intelligibility of the conversation and not in terms of the overall aesthetic quality of the video. This work presents a paired comparison experiment to determine the subjective preferences of ASL users in terms of the trade-off between intelligibility and quality when varying the proportion of the bitrate allocated explicitly to the regions of the video containing the signer. A rate-distortion optimization technique, which jointly optimizes a quality criterion and an intelligibility criterion according to a user-specified parameter, generates test video pairs for the subjective experiment. Experimental results suggest that at sufficiently high bitrates, all users prefer videos in which the non-signer regions of the video are encoded at some nominal rate. As the total encoding bitrate decreases, users generally prefer video in which a greater proportion of the rate is allocated to the signer. The specific operating points preferred in the quality-intelligibility trade-off vary with the demographics of the users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boche, H., E-mail: boche@tum.de; Janßen, G., E-mail: gisbert.janssen@tum.de
We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless, but the source state an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. [“Universal quantum state merging,” J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS we show by example that the robustification and elimination based approach generally leads to suboptimal entanglement as well as classical communication rates.
Robust Stabilization of Uncertain Systems Based on Energy Dissipation Concepts
NASA Technical Reports Server (NTRS)
Gupta, Sandeep
1996-01-01
Robust stability conditions obtained through generalization of the notion of energy dissipation in physical systems are discussed in this report. Linear time-invariant (LTI) systems which dissipate energy corresponding to quadratic power functions are characterized in the time-domain and the frequency-domain, in terms of linear matrix inequalities (LMIs) and algebraic Riccati equations (AREs). A novel characterization of strictly dissipative LTI systems is introduced in this report. Sufficient conditions in terms of dissipativity and strict dissipativity are presented for (1) stability of the feedback interconnection of dissipative LTI systems, (2) stability of dissipative LTI systems with memoryless feedback nonlinearities, and (3) quadratic stability of uncertain linear systems. It is demonstrated that the framework of dissipative LTI systems investigated in this report unifies and extends small gain, passivity, and sector conditions for stability. Techniques for selecting power functions for characterization of uncertain plants and robust controller synthesis based on these stability results are introduced. A spring-mass-damper example is used to illustrate the application of these methods for robust controller synthesis.
Fundamental limits on quantum dynamics based on entropy change
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Khatri, Sumeet; Siopsis, George; Wilde, Mark M.
2018-01-01
It is well known in the realm of quantum mechanics and information theory that the entropy is non-decreasing for the class of unital physical processes. However, in general, the entropy does not exhibit monotonic behavior. This has restricted the use of entropy change in characterizing evolution processes. Recently, a lower bound on the entropy change was provided in the work of Buscemi, Das, and Wilde [Phys. Rev. A 93(6), 062314 (2016)]. We explore the limit that this bound places on the physical evolution of a quantum system and discuss how these limits can be used as witnesses to characterize quantum dynamics. In particular, we derive a lower limit on the rate of entropy change for memoryless quantum dynamics, and we argue that it provides a witness of non-unitality. This limit on the rate of entropy change leads to definitions of several witnesses for testing memory effects in quantum dynamics. Furthermore, from the aforementioned lower bound on entropy change, we obtain a measure of non-unitarity for unital evolutions.
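A quick numerical illustration of the unital case (consistent with, though not drawn from, the paper): the depolarizing qubit channel is unital, so the von Neumann entropy of its output is non-decreasing in the mixing parameter.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits from the eigenvalues of a density matrix."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log2(vals)).sum())

def depolarize(rho, p):
    """Unital qubit channel: mixes rho toward the maximally mixed state."""
    return (1 - p) * rho + p * np.eye(2) / 2

rho = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)  # a pure state (rank one)
for p in (0.0, 0.2, 0.5):
    print(p, von_neumann_entropy(depolarize(rho, p)))
```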
Non-Markovian quantum processes: Complete framework and efficient characterization
NASA Astrophysics Data System (ADS)
Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan
2018-01-01
Currently, there is no systematic way to describe a quantum process with memory solely in terms of experimentally accessible quantities. However, recent technological advances mean we have control over systems at scales where memory effects are non-negligible. The lack of such an operational description has hindered advances in understanding physical, chemical, and biological processes, where often unjustified theoretical assumptions are made to render a dynamical description tractable. This has led to theories plagued with unphysical results and no consensus on what a quantum Markov (memoryless) process is. Here, we develop a universal framework to characterize arbitrary non-Markovian quantum processes. We show how a multitime non-Markovian process can be reconstructed experimentally, and that it has a natural representation as a many-body quantum state, where temporal correlations are mapped to spatial ones. Moreover, this state is expected to have an efficient matrix-product-operator form in many cases. Our framework constitutes a systematic tool for the effective description of memory-bearing open-system evolutions.
Memory in random bouncing ball dynamics
NASA Astrophysics Data System (ADS)
Zouabi, C.; Scheibert, J.; Perret-Liaudet, J.
2016-09-01
The bouncing of an inelastic ball on a vibrating plate is a popular model used in various fields, from granular gases to nanometer-sized mechanical contacts. For random plate motion, so far, the model has been studied using Poincaré maps in which the excitation by the plate at successive bounces is assumed to be a discrete Markovian (memoryless) process. Here, we investigate numerically the behaviour of the model for continuous random excitations with tunable correlation time. We show that the system dynamics are controlled by the ratio of the Markovian mean flight time of the ball and the mean time between successive peaks in the motion of the exciting plate. When this ratio, which depends on the bandwidth of the excitation signal, exceeds a certain value, the Markovian approach is appropriate; below, memory of preceding excitations arises, leading to a significant decrease of the jump duration; at the smallest values of the ratio, chattering occurs. Overall, our results open the way for uses of the model in the low-excitation regime, which is still poorly understood.
Analysis of automatic repeat request methods for deep-space downlinks
NASA Technical Reports Server (NTRS)
Pollara, F.; Ekroot, L.
1995-01-01
Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
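The core accounting behind such comparisons is short: with word error probability p_w and independent retransmissions, an idealized ARQ scheme uses 1/(1 − p_w) transmissions per delivered word on average, which translates directly into a power overhead. A minimal sketch with illustrative numbers (not the article's mission parameters):

```python
def arq_expected_transmissions(p_word: float) -> float:
    """Mean transmissions per word for idealized ARQ with i.i.d. word errors."""
    return 1.0 / (1.0 - p_word)

for p in (1e-3, 1e-2, 1e-1):
    n = arq_expected_transmissions(p)
    print(f"p_w={p:g}: {n:.4f} transmissions/word, energy overhead {100 * (n - 1):.2f}%")
```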
Contact enhancement of locomotion in spreading cell colonies
NASA Astrophysics Data System (ADS)
D'Alessandro, Joseph; Solon, Alexandre P.; Hayakawa, Yoshinori; Anjard, Christophe; Detcheverry, François; Rieu, Jean-Paul; Rivière, Charlotte
2017-10-01
The dispersal of cells from an initially constrained location is a crucial aspect of many physiological phenomena, ranging from morphogenesis to tumour spreading. In such processes, cell-cell interactions may deeply alter the motion of single cells, and in turn the collective dynamics. While contact phenomena like contact inhibition of locomotion are known to come into play at high densities, here we focus on the little explored case of non-cohesive cells at moderate densities. We fully characterize the spreading of micropatterned colonies of Dictyostelium discoideum cells from the complete set of individual trajectories. From data analysis and simulation of an elementary model, we demonstrate that contact interactions act to speed up the early population spreading by promoting individual cells to a state of higher persistence, which constitutes an as-yet unreported contact enhancement of locomotion. Our findings also suggest that the current modelling paradigm of memoryless active particles may need to be extended to account for the history-dependent internal state of motile cells.
Quantifying memory in complex physiological time-series.
Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R
2013-01-01
In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.
NASA Astrophysics Data System (ADS)
Burel, Maxym; Martin, Sylvain; Bonnefoy, Olivier
2017-06-01
We present the results of an experimental study on the jamming/flowing transition. A suspension of neutrally buoyant large particles flows in a horizontal rectangular duct, where an artificial restriction triggers jamming. We show that the avalanche size distribution is exponential, that is, memoryless. We further demonstrate that the avalanche size diverges when the restriction size approaches a critical value and that this divergence is well described by a power law. The parameters (critical opening size and divergence velocity) are compared to literature values and show a strong similarity with other systems. Another result of this paper is the study of the influence of the particle morphology. We show that, for a moderate restriction size, the dead zone formed right upstream of the restriction is larger for angular particles but, paradoxically, that the avalanche size is larger for polyhedra compared to spheres by at least one order of magnitude.
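Exponentially distributed avalanche sizes are exactly what memorylessness means here: P(X > s + t | X > s) = P(X > t). A quick numerical check of that identity on synthetic data (not the experimental distributions):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=10.0, size=1_000_000)  # synthetic avalanche sizes

s, t = 5.0, 8.0
p_cond = (x > s + t).mean() / (x > s).mean()     # P(X > s+t | X > s)
p_marg = (x > t).mean()                          # P(X > t)
print(f"conditional {p_cond:.4f} vs marginal {p_marg:.4f}")
```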
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1992-01-01
The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink are studied. The EOS transmits picture frame data to the ground via the Telemetry Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that those errors are bursty. The research proceeded by developing a computer based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN was written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
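Bursty channels of the kind CLEAN models are commonly represented by a two-state Gilbert-Elliott model; the sketch below is that generic model (assumed transition probabilities and error rates, not CLEAN itself):

```python
import numpy as np

def gilbert_elliott(n_bits, p_gb=0.001, p_bg=0.1, ber_good=1e-6, ber_bad=0.1, seed=0):
    """Simulate burst errors: state alternates good<->bad, errors drawn per state."""
    rng = np.random.default_rng(seed)
    errors = np.zeros(n_bits, dtype=bool)
    bad = False
    for i in range(n_bits):
        bad = rng.random() < (1 - p_bg if bad else p_gb)  # next channel state
        errors[i] = rng.random() < (ber_bad if bad else ber_good)
    return errors

e = gilbert_elliott(100_000)
print("overall BER:", e.mean())
```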
An adaptive DPCM algorithm for predicting contours in NTSC composite video signals
NASA Astrophysics Data System (ADS)
Cox, N. R.
An adaptive DPCM algorithm is proposed for encoding digitized National Television Systems Committee (NTSC) color video signals. This algorithm essentially predicts picture contours in the composite signal without resorting to component separation. The contour parameters (slope thresholds) are optimized using four 'typical' television frames that have been sampled at three times the color subcarrier frequency. Three variations of the basic predictor are simulated and compared quantitatively with three non-adaptive predictors of similar complexity. By incorporating a dual-word-length coder and buffer memory, high quality color pictures can be encoded at 4.0 bits/pel or 42.95 Mbit/s. The effect of channel error propagation is also investigated.
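For orientation, a bare-bones DPCM loop is shown below (a fixed previous-sample predictor; the article's adaptive contour prediction would replace the predictor and adapt the slope thresholds):

```python
import numpy as np

def dpcm_encode(samples, quantizer_step=4):
    """Previous-sample DPCM: transmit quantized prediction errors."""
    recon_prev, codes = 0, []
    for s in samples:
        err = s - recon_prev                      # prediction error
        q = int(round(err / quantizer_step))      # quantized code to transmit
        codes.append(q)
        recon_prev += q * quantizer_step          # decoder-side reconstruction
    return codes

signal = (128 + 100 * np.sin(np.linspace(0, 4, 50))).astype(int)
print(dpcm_encode(signal)[:10])
```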
Optimal space communications techniques. [discussion of video signals and delta modulation
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
The encoding of video signals using the Song Adaptive Delta Modulator (Song ADM) is discussed. The video signals are characterized as a sequence of pulses having arbitrary height and width. Although the ADM is suited to tracking signals having fast rise times, it was found that the DM algorithm (which permits an exponential rise for estimating an input step) results in a large overshoot and an underdamped response to the step. An overshoot suppression algorithm which significantly reduces the ringing while not affecting the rise time is presented, along with formulae for the rise time and the settling time. Channel errors and their effect on the DM encoded bit stream were also investigated.
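A minimal adaptive delta modulator illustrates the mechanism being tuned; this is a textbook step-size doubling rule, not the Song algorithm, and the overshoot suppression described above would additionally clamp the step growth at detected edges:

```python
import numpy as np

def adm_encode(samples, step0=1.0, grow=2.0, shrink=0.5):
    """Adaptive delta modulation: step grows on repeated bits, shrinks otherwise."""
    est, step, bits, prev_bit = 0.0, step0, [], 0
    for s in samples:
        bit = 1 if s >= est else -1
        step *= grow if bit == prev_bit else shrink
        est += bit * step                 # tracked estimate of the input
        bits.append(bit)
        prev_bit = bit
    return bits

pulse = np.concatenate([np.zeros(20), 50 * np.ones(30)])  # step input
print(adm_encode(pulse)[:30])
```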
Enhancing micrographs obtained with a scanning acoustic microscope using false-color encoding
NASA Astrophysics Data System (ADS)
Hammer, R.; Hollis, R. L.
1982-04-01
The periodic signal variations observed in reflection acoustic microscopy when lens-to-sample spacing is changed lead to reversals in image contrast. This contrast mechanism can be described by a V(Z) function, where V is the transducer voltage and Z the lens-to-sample spacing. In this work we show how by obtaining V(Z) curves from each plane of a complex sample, judicious choices of focal positions can be made to optimize signals from planes of interest, which allows color encoding of the image from each plane in an overlay image. We present false-color micrographs obtained in this way, along with A scans and V(Z) curves to demonstrate the technique.
Gu, Yang; Deng, Jieying; Liu, Yanfeng; Li, Jianghua; Shin, Hyun-Dong; Du, Guocheng; Chen, Jian; Liu, Long
2017-10-01
N-acetylglucosamine (GlcNAc) is an important amino sugar extensively used in the healthcare field. In a previous study, the recombinant Bacillus subtilis strain BSGN6-PxylA-glmS-pP43NMK-GNA1 (BN0-GNA1) had been constructed for microbial production of GlcNAc by pathway design and modular optimization. Here, the production of GlcNAc is further improved by rewiring both the glucose transportation and central metabolic pathways. First, the phosphotransferase system (PTS) is blocked by deletion of three genes, yyzE (encoding the PTS system transporter subunit IIA YyzE), ypqE (encoding the PTS system transporter subunit IIA YpqE), and ptsG (encoding the PTS system glucose-specific EIICBA component), resulting in a 47.6% increase in the GlcNAc titer (from 6.5 ± 0.25 to 9.6 ± 0.16 g L⁻¹) in shake flasks. Then, reinforcement of the expression of the glcP and glcK genes and optimization of glucose facilitator proteins are performed to promote glucose import and phosphorylation. Next, the competitive pathways for GlcNAc synthesis, namely glycolysis, the peptidoglycan synthesis pathway, the pentose phosphate pathway, and the tricarboxylic acid cycle, are repressed by initiation codon-optimization strategies, and the GlcNAc titer in shake flasks is improved from 10.8 ± 0.25 to 13.2 ± 0.31 g L⁻¹. Finally, the GlcNAc titer is further increased to 42.1 ± 1.1 g L⁻¹ in a 3-L fed-batch bioreactor, which is 1.72-fold that of the original strain, BN0-GNA1. This study shows considerably enhanced GlcNAc production, and the metabolic engineering strategy described here will be useful for engineering other prokaryotic microorganisms for the production of GlcNAc and related molecules.
Robust information propagation through noisy neural circuits
Pouget, Alexandre
2017-01-01
Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise. PMID:28419098
Graeber, Kai; Linkies, Ada; Steinbrecher, Tina; Mummenhoff, Klaus; Tarkowská, Danuše; Turečková, Veronika; Ignatz, Michael; Sperber, Katja; Voegele, Antje; de Jong, Hans; Urbanová, Terezie; Strnad, Miroslav; Leubner-Metzger, Gerhard
2014-01-01
Seed germination is an important life-cycle transition because it determines subsequent plant survival and reproductive success. To detect optimal spatiotemporal conditions for germination, seeds act as sophisticated environmental sensors integrating information such as ambient temperature. Here we show that the DELAY OF GERMINATION 1 (DOG1) gene, known for providing dormancy adaptation to distinct environments, determines the optimal temperature for seed germination. By reciprocal gene-swapping experiments between Brassicaceae species we show that the DOG1-mediated dormancy mechanism is conserved. Biomechanical analyses show that this mechanism regulates the material properties of the endosperm, a seed tissue layer acting as germination barrier to control coat dormancy. We found that DOG1 inhibits the expression of gibberellin (GA)-regulated genes encoding cell-wall remodeling proteins in a temperature-dependent manner. Furthermore we demonstrate that DOG1 causes temperature-dependent alterations in the seed GA metabolism. These alterations in hormone metabolism are brought about by the temperature-dependent differential expression of genes encoding key enzymes of the GA biosynthetic pathway. These effects of DOG1 lead to a temperature-dependent control of endosperm weakening and determine the optimal temperature for germination. The conserved DOG1-mediated coat-dormancy mechanism provides a highly adaptable temperature-sensing mechanism to control the timing of germination. PMID:25114251
On The Influence Of Vector Design On Antibody Phage Display
Soltes, Glenn; Hust, Michael; Ng, Kitty K.Y.; Bansal, Aasthaa; Field, Johnathan; Stewart, Donald I.H.; Dübel, Stefan; Cha, Sanghoon; Wiersma, Erik J
2007-01-01
Phage display technology is an established technology particularly useful for the generation of monoclonal antibodies (mAbs). The isolation of phagemid-encoded mAb fragments depends on several features of a phage preparation. The aims of this study were to optimize phage display vectors, and to ascertain if different virion features can be optimized independently of each other. Comparisons were made between phagemid virions assembled by g3p-deficient helper phage, Hyperphage, Ex-phage or Phaberge, or the corresponding g3p-sufficient helper phage, M13K07. All g3p-deficient helper phage provided a similar level of antibody display, significantly higher than that of M13K07. Hyperphage packaged virions at least 100-fold more efficiently than did Ex-phage or Phaberge. Phaberge's packaging efficiency was improved by using a SupE strain. Different phagemids were also compared. Removal of a 56 base pair fragment from the promoter region resulted in an increased display level and increased virion production. This critical fragment encodes a lacZ'-like peptide and is also present in other commonly used phagemids. Increasing the display level did not show a statistical correlation with phage production, phage infectivity or bacterial growth rate. However, phage production was positively correlated with phage infectivity. In summary, this study demonstrates simultaneous optimization of multiple independent features of importance for phage selection. PMID:16996161
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean-up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
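The pass bookkeeping and the truncation step are easy to make concrete. For M bit planes, tier-1 performs 3M - 2 passes per code block; rate control then keeps, per block, the pass prefix with the best cost. A hedged sketch of Lagrangian truncation (hypothetical numbers, not the standard's exact PCRD-opt routine):

```python
def select_passes(rates, dists, lam):
    """Keep the truncation point minimizing D + lam * R over pass prefixes.

    rates[i], dists[i]: cumulative bits and remaining distortion after pass i.
    """
    costs = [d + lam * r for r, d in zip(rates, dists)]
    return costs.index(min(costs))  # index of the last coding pass kept

M = 8
print("coding passes for", M, "bit planes:", 3 * M - 2)
print("truncate after pass:", select_passes([0, 100, 250, 500], [90, 40, 15, 5], 0.1))
```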
Difficulty of distinguishing product states locally
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.
2017-01-01
Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.
Motion-Compensated Compression of Dynamic Voxelized Point Clouds.
De Queiroz, Ricardo L; Chou, Philip A
2017-05-24
Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
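The per-block decision reduces to a Lagrangian comparison (a sketch of the idea, with made-up costs; not the authors' implementation): encode the block intra, or replace it with a motion-compensated block from the previous frame, whichever minimizes D + λR.

```python
def choose_mode(d_intra, r_intra, d_mc, r_mc, lam):
    """Rate-distortion mode decision for one block of voxels."""
    j_intra = d_intra + lam * r_intra   # cost of intra-frame coding
    j_mc = d_mc + lam * r_mc            # cost of motion compensation
    return "intra" if j_intra <= j_mc else "motion-compensated"

# Hypothetical costs: intra is accurate but expensive; MC is cheap but distorted.
print(choose_mode(d_intra=12.0, r_intra=900, d_mc=20.0, r_mc=40, lam=0.05))
```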
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
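The trade-off at the heart of the calibration algorithm is visible even in a scalar first-order adaptive estimator (a toy stand-in for the paper's adaptive Bayesian filters): larger learning rates converge faster but settle at a larger steady-state error.

```python
import numpy as np

def track(lr, n=2000, true_w=2.0, noise=0.5, seed=0):
    """Scalar adaptive estimate: w <- w + lr * (observation - w)."""
    rng = np.random.default_rng(seed)
    w, hist = 0.0, []
    for _ in range(n):
        y = true_w + noise * rng.standard_normal()
        w += lr * (y - w)
        hist.append(w)
    tail = np.array(hist[n // 2:])          # samples after convergence
    return np.abs(tail - true_w).mean()     # steady-state error

for lr in (0.005, 0.05, 0.5):
    print(f"lr={lr}: steady-state error ~ {track(lr):.4f}")
```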
USDA-ARS?s Scientific Manuscript database
Campylobacter jejuni is a leading cause of bacterial diarrheal disease throughout the world and a frequent commensal in the intestinal tract of poultry and many other animals. For maintaining optimal growth and ability to colonize various hosts, C. jejuni depends upon two-component regulatory system...
An opinion formation based binary optimization approach for feature selection
NASA Astrophysics Data System (ADS)
Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo
2018-02-01
This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions, while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.
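A minimal sketch of the encoding idea (generic consensus dynamics with an invented toy objective; the paper's interaction model and benchmark classifiers are not reproduced): each agent's opinion is a binary feature mask, and agents copy opinion bits from better-scoring agents.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_feats = 10, 12

def fitness(mask):
    """Toy objective: reward features 0-3, penalize subset size (stand-in for a classifier score)."""
    return mask[:4].sum() - 0.2 * mask.sum()

opinions = rng.integers(0, 2, (n_agents, n_feats))       # each row: a feature subset
for _ in range(200):
    i, j = rng.choice(n_agents, 2, replace=False)        # interacting pair of agents
    leader, follower = (i, j) if fitness(opinions[i]) >= fitness(opinions[j]) else (j, i)
    k = rng.integers(n_feats)                            # follower adopts one opinion bit
    opinions[follower, k] = opinions[leader, k]

best = max(opinions, key=fitness)
print("selected features:", np.flatnonzero(best))
```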
Experimental evaluation of fingerprint verification system based on double random phase encoding
NASA Astrophysics Data System (ADS)
Suzuki, Hiroyuki; Yamaguchi, Masahiro; Yachida, Masuyoshi; Ohyama, Nagaaki; Tashima, Hideaki; Obi, Takashi
2006-03-01
We proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurate verification of an authorized individual decreases when the fingerprint is shifted significantly. In this paper, a review of the proposed system is presented and preprocessing for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated by using an optimized template for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment; its results show that the false rejection rate is improved.
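Double random phase encoding itself is compact to state (a standard textbook formulation; the smart-card and fingerprint-verification specifics are omitted): multiply the image by one random phase, Fourier transform, multiply by a second random phase, and inverse transform.

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((64, 64))                      # stand-in for a fingerprint image

phase1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane phase key
phase2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane phase key

encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)
# Decryption reverses the steps with the conjugate keys.
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)
print("max reconstruction error:", np.abs(decrypted.real - img).max())
```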
Integrated-optics heralded controlled-NOT gate for polarization-encoded qubits
NASA Astrophysics Data System (ADS)
Zeuner, Jonas; Sharma, Aditya N.; Tillmann, Max; Heilmann, René; Gräfe, Markus; Moqanaki, Amir; Szameit, Alexander; Walther, Philip
2018-03-01
Recent progress in integrated-optics technology has made photonics a promising platform for quantum networks and quantum computation protocols. Integrated optical circuits are characterized by small device footprints and unrivalled intrinsic interferometric stability. Here, we take advantage of femtosecond-laser-written waveguides' ability to process polarization-encoded qubits and present an implementation of a heralded controlled-NOT gate on chip. We evaluate the gate performance in the computational basis and a superposition basis, showing that the gate can create polarization entanglement between two photons. Transmission through the integrated device is optimized using thermally expanded core fibers and adiabatically reduced mode-field diameters at the waveguide facets. This demonstration underlines the feasibility of integrated quantum gates for all-optical quantum networks and quantum repeaters.
Encoding and Decoding of Multi-Channel ICMS in Macaque Somatosensory Cortex.
Dadarlat, Maria C; Sabes, Philip N
2016-01-01
Naturalistic control of brain-machine interfaces will require artificial proprioception, potentially delivered via intracortical microstimulation (ICMS). We have previously shown that multi-channel ICMS can guide a monkey reaching to unseen targets in a planar workspace. Here, we expand on that work, asking how ICMS is decoded into target angle and distance by analyzing the performance of a monkey when ICMS feedback was degraded. From the resulting pattern of errors, we found that the animal's estimate of target direction was consistent with a weighted circular-mean strategy, close to the optimal decoding strategy given the ICMS encoding. These results support our previous finding that animals can learn to use this artificial sensory feedback in an efficient and naturalistic manner.
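The decoding strategy identified for target direction can be written in two lines (a sketch with invented channel angles and weights):

```python
import numpy as np

def weighted_circular_mean(angles, weights):
    """Direction estimate from per-channel preferred angles and weights."""
    z = np.sum(weights * np.exp(1j * np.asarray(angles)))
    return np.angle(z)

channel_angles = np.radians([0, 45, 90, 135])   # preferred directions (assumed)
icms_weights = np.array([0.1, 0.7, 1.0, 0.3])   # e.g., relative stimulation strength
print(np.degrees(weighted_circular_mean(channel_angles, icms_weights)))
```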
Effectiveness of self-generated cues in early Alzheimer's disease.
Lipinska, B; Bäckman, L; Mäntylä, T; Viitanen, M
1994-12-01
The ability to utilize cognitive support in the form of self-generated cues in mild Alzheimer's disease (AD), and the factors promoting efficient cue utilization in this group of patients, were examined in two experiments on memory for words. Results from both experiments showed that normal old adults as well as AD patients performed better with self-generated cues than with experimenter-provided cues, although the latter type of cues resulted in gains relative to free recall. The findings indicate no qualitative differences in patterns of performance between the normal old and the AD patients. For both groups of subjects, cue effectiveness was optimized when (a) there was self-generation activity at encoding, and (b) encoding and retrieval conditions were compatible.
On Adapting the Tensor Voting Framework to Robust Color Image Denoising
NASA Astrophysics Data System (ADS)
Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme
This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (by using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology performs better than state-of-the-art techniques.
NASA Astrophysics Data System (ADS)
Doblas, Ana; Dutta, Ananya; Saavedra, Genaro; Preza, Chrysanthe
2018-02-01
Previously, a wavefront encoded (WFE) imaging system implemented using a squared cubic (SQUBIC) phase mask was verified to reduce the sensitivity of the imaging system to spherical aberration (SA). The strength of the SQUBIC phase mask and, as a consequence, the performance of the WFE system are controlled by a design parameter, A. Although a higher A-value makes the WFE system more tolerant to SA, this is accomplished at the expense of the effective imaging resolution. In this contribution, we investigate this trade-off in order to find an optimal A-value that balances the effect of SA against the loss of resolution.
Experimental study on discretely modulated continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yong; Zou Hongxin; Chen Pingxing
2010-08-15
We present a discretely modulated continuous-variable quantum key distribution system in free space by using strong coherent states. The amplitude noise in the laser source is suppressed to the shot-noise limit by using a mode cleaner combined with a frequency shift technique. Also, it is proven that the phase noise in the source has no impact on the final secret key rate. In order to increase the encoding rate, we use broadband homodyne detectors and the no-switching protocol. In a realistic model, we establish a secret key rate of 46.8 kbits/s against collective attacks at an encoding rate of 10 MHz for a 90% channel loss when the modulation variance is optimal.
NASA Astrophysics Data System (ADS)
Malone, Joseph D.; El-Haddad, Mohamed T.; Leeburg, Kelsey C.; Terrones, Benjamin D.; Tao, Yuankai K.
2018-02-01
Limited visualization of semi-transparent structures in the eye remains a critical barrier to improving clinical outcomes and developing novel surgical techniques. While increases in imaging speed have enabled intraoperative optical coherence tomography (iOCT) imaging of surgical dynamics, several critical barriers to clinical adoption remain. Specifically, these include (1) static fields-of-view (FOVs) requiring manual instrument-tracking; (2) high frame-rates requiring sparse sampling, which limits the FOV; and (3) the small iOCT FOV also limiting the ability to co-register data with surgical microscopy. We previously addressed these limitations in image-guided ophthalmic microsurgery by developing microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography. Complementary en face images enabled orientation and co-registration with the widefield surgical microscope view while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures-of-interest. In addition, we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Unfortunately, our previous system lacked the resolution and optical throughput for in vivo retinal imaging and necessitated removal of the cornea and lens. These limitations were predominantly a result of optical aberrations from imaging through a shared surgical microscope objective lens, which was modeled as a paraxial surface. Here, we present an optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) system. We use a novel lens characterization method to develop an accurate model of surgical microscope objective performance and balance out inherent aberrations using iSECTR relay optics. Using this system, we demonstrate in vivo multimodal ophthalmic imaging through a surgical microscope.
Quantum annealing correction with minor embedding
NASA Astrophysics Data System (ADS)
Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.
2015-10-01
Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel based biosignal lossless data compressor. PMID:25237900
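The joint-coding decision described above can be sketched as a residual-correlation threshold (illustrative; the paper's exact decision statistic and threshold are not reproduced):

```python
import numpy as np

def use_joint_coding(res_a, res_b, threshold=0.6):
    """Decide difference coding of channel b against a from residual correlation."""
    r = np.corrcoef(res_a, res_b)[0, 1]
    return abs(r) >= threshold

rng = np.random.default_rng(5)
lead1 = rng.standard_normal(1024)                    # residuals of reference channel
lead2 = 0.9 * lead1 + 0.1 * rng.standard_normal(1024)  # strongly correlated channel
print(use_joint_coding(lead1, lead2))
```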
Cohen, Michael S.; Rissman, Jesse; Suthana, Nanthia A.; Castel, Alan D.; Knowlton, Barbara J.
2014-01-01
A number of prior fMRI studies have focused on the ways in which the midbrain dopaminergic reward system co-activates with the hippocampus to potentiate memory for valuable items. However, another means by which people could selectively remember more valuable to-be-remembered items is to be selective in their use of effective but effortful encoding strategies. To broadly examine the neural mechanisms by which value affects subsequent memory, we used fMRI to examine how differences in brain activity at encoding as a function of value relate to subsequent free recall for words. Each word was preceded by an arbitrarily assigned point value, and participants went through multiple study-test cycles with feedback on their point total at the end of each list, allowing for sculpting of cognitive strategies. We examined the correlation between value-related modulation of brain activity and participants’ selectivity index, a measure of how close participants came to their optimal point total given the number of items recalled. Greater selectivity scores were associated with greater differences in activation of semantic processing regions, including left inferior frontal gyrus and left posterior lateral temporal cortex, during encoding of high-value words relative to low-value words. Although we also observed value-related modulation within midbrain and ventral striatal reward regions, our fronto-temporal findings suggest that strategic engagement of deep semantic processing may be an important mechanism for selectively encoding valuable items. PMID:24683066
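The selectivity index mentioned above can be made concrete with a small worked example. The formula below, (actual − chance) / (ideal − chance), is the standard form of this measure in the value-directed remembering literature; the exact scoring used in the study is not spelled out in the abstract, so treat this as an assumed reconstruction:

    def selectivity_index(recalled_values, all_values):
        """How close the points earned are to the best achievable total
        for the same number of recalled items (1 = perfectly selective,
        0 = chance-level selection)."""
        n = len(recalled_values)
        actual = sum(recalled_values)
        ideal = sum(sorted(all_values, reverse=True)[:n])    # n most valuable
        chance = n * sum(all_values) / len(all_values)       # n random items
        return (actual - chance) / (ideal - chance)

    # Recalling the 3 highest-valued of items worth 1..10 gives SI = 1.0
    print(selectivity_index([10, 9, 8], list(range(1, 11))))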
Hinson, Brian T; Morgansen, Kristi A
2015-10-06
The wings of the hawkmoth Manduca sexta are lined with mechanoreceptors called campaniform sensilla that encode wing deformations. During flight, the wings deform in response to a variety of stimuli, including inertial-elastic loads due to the wing flapping motion, aerodynamic loads, and exogenous inertial loads transmitted by disturbances. Because the wings are actuated, flexible structures, the strain-sensitive campaniform sensilla are capable of detecting inertial rotations and accelerations, allowing the wings to serve not only as a primary actuator, but also as a gyroscopic sensor for flight control. We study the gyroscopic sensing of the hawkmoth wings from a control theoretic perspective. Through the development of a low-order model of flexible wing flapping dynamics, and the use of nonlinear observability analysis, we show that the rotational acceleration inherent in wing flapping enables the wings to serve as gyroscopic sensors. We compute a measure of sensor fitness as a function of sensor location and directional sensitivity by using the simulation-based empirical observability Gramian. Our results indicate that gyroscopic information is encoded primarily through shear strain due to wing twisting, where inertial rotations cause detectable changes in pronation and supination timing and magnitude. We solve an observability-based optimal sensor placement problem to find the optimal configuration of strain sensor locations and directional sensitivities for detecting inertial rotations. The optimal sensor configuration shows parallels to the campaniform sensilla found on hawkmoth wings, with clusters of sensors near the wing root and wing tip. The optimal spatial distribution of strain directional sensitivity provides a hypothesis for how heterogeneity of campaniform sensilla may be distributed.
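The simulation-based empirical observability Gramian used above has a standard construction: perturb each initial state by ±ε, simulate the output trajectories, and integrate the outer products of their differences. A minimal sketch, where the `simulate` callable standing in for the wing model is an assumed user-supplied function returning a [time × outputs] array:

    import numpy as np

    def empirical_observability_gramian(simulate, x0, eps=1e-3, dt=1e-3):
        """W[i, j] = (1 / 4 eps^2) * integral of dy_i(t) . dy_j(t) dt,
        where dy_i is the output-trajectory difference between +eps and
        -eps perturbations of state component i."""
        n = len(x0)
        dY = []
        for i in range(n):
            e = np.zeros(n); e[i] = eps
            dY.append(simulate(x0 + e) - simulate(x0 - e))   # [T, p] array
        W = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                W[i, j] = np.sum(dY[i] * dY[j]) * dt / (4 * eps**2)
        return W

A sensor-placement fitness can then be taken as a scalar measure of W, such as its smallest eigenvalue or determinant, maximized over candidate sensor locations and directional sensitivities.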
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, computational complexity is measured in terms of encoding time. First, the complexity budget is mapped to a target set of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, an optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (down to 10% of full complexity) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
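The mapping from a complexity budget to a set of prediction modes might be sketched as a lookup over offline statistics, as below. The mode combinations, relative times and rate-distortion losses in this table are invented placeholders, since the paper's measured values are not given in the abstract:

    # (relative encoding time, rate-distortion loss) per mode combination
    MODE_COMBOS = {
        "all_modes":        (1.00, 0.00),
        "no_AMP":           (0.70, 0.05),
        "merge_2Nx2N_only": (0.40, 0.20),
        "skip_biased":      (0.10, 0.60),
    }

    def select_mode_combo(target_complexity):
        """Cheapest-loss mode combination whose relative encoding time
        fits the complexity budget (a fraction of full encoding time)."""
        feasible = [(loss, name) for name, (t, loss) in MODE_COMBOS.items()
                    if t <= target_complexity]
        return min(feasible)[1] if feasible else "skip_biased"

    print(select_mode_combo(0.4))    # -> merge_2Nx2N_only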
Cingulo-opercular activity affects incidental memory encoding for speech in noise.
Vaden, Kenneth I; Teubner-Rhodes, Susan; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A
2017-08-15
Speech that is correctly understood in difficult listening conditions is often hard to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions. Copyright © 2017 Elsevier Inc. All rights reserved.
Bone Collagen: New Clues to its Mineralization Mechanism From Recessive Osteogenesis Imperfecta
Eyre, David R.; Ann Weis, Mary
2013-01-01
Until 2006 the only mutations known to cause osteogenesis imperfecta (OI) were in the two genes coding for type I collagen chains. These dominant mutations affecting the expression or primary sequence of collagen α1(I) and α2(I) chains account for over 90% of OI cases. Since then a growing list of mutant genes causing the 5–10% of recessive cases has rapidly emerged. They include CRTAP, LEPRE1 and PPIB, which encode three proteins forming the prolyl 3-hydroxylase complex; PLOD2 and FKBP10, which encode respectively lysyl hydroxylase 2 and a foldase required for its activity in forming mature cross-links in bone collagen; SERPIN H1, which encodes the collagen chaperone HSP47; SERPIN F1, which encodes pigment epithelium-derived factor required for osteoid mineralization; and BMP1, which encodes the type I procollagen C-propeptidase. All cause fragile bone in infancy, which can include over-mineralization or under-mineralization defects as well as abnormal collagen post-translational modifications. Consistently both dominant and recessive variants lead to abnormal cross-linking chemistry in bone collagen. These recent discoveries strengthen the potential for a common pathogenic mechanism of misassembled collagen fibrils. Of the new genes identified, eight encode proteins required for collagen post-translational modification, chaperoning of newly synthesized collagen chains into native molecules or transport through the endoplasmic reticulum and Golgi for polymerization, cross-linking and mineralization. In reviewing these findings, we conclude that a common theme is emerging in the pathogenesis of brittle bone disease of mishandled collagen assembly with important insights on post-translational features of bone collagen that have evolved to optimize it as a biomineral template. PMID:23508630
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartkiewicz, Karol; Miranowicz, Adam
We find an optimal quantum cloning machine, which clones qubits of arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning, in contrast to standard phase-covariant cloning, for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and the fidelity of the clones. As an illustration, we analyze cloning of a qubit state described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine, for which quantum circuits are known.
Evolutionary Optimization of Yagi-Uda Antennas
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Colombano, Silvano P.
2001-01-01
Yagi-Uda antennas are known to be difficult to design and optimize due to their sensitivity at high gain, and the inclusion of numerous parasitic elements. We present a genetic algorithm-based automated antenna optimization system that uses a fixed Yagi-Uda topology and a byte-encoded antenna representation. The fitness calculation allows the implicit relationship between power gain and sidelobe/backlobe loss to emerge naturally, a technique that is less complex than previous approaches. The genetic operators used are also simpler. Our results include Yagi-Uda antennas that have excellent bandwidth and gain properties with very good impedance characteristics. Results exceeded previous Yagi-Uda antennas produced via evolutionary algorithms by at least 7.8% in mainlobe gain. We also present encouraging preliminary results where a coevolutionary genetic algorithm is used.
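A byte-encoded GA of the kind described might be sketched as follows. Each genome maps one byte to an element length and one to a spacing; the fitness here is a toy stand-in, since the real system would score mainlobe gain and sidelobe/backlobe levels with an electromagnetic simulator (e.g. NEC), and all ranges and rates below are illustrative assumptions:

    import random

    N_ELEM = 6                       # reflector + driven element + directors
    GENOME_LEN = 2 * N_ELEM          # one byte per length, one per spacing

    def decode(genome):
        """Map bytes to dimensions in wavelengths (ranges assumed)."""
        lengths  = [0.15 + 0.60 * b / 255 for b in genome[:N_ELEM]]
        spacings = [0.05 + 0.40 * b / 255 for b in genome[N_ELEM:]]
        return lengths, spacings

    def fitness(genome):
        # Toy objective; a real run would call an EM simulator and combine
        # mainlobe gain with sidelobe/backlobe penalties.
        lengths, spacings = decode(genome)
        return -abs(sum(lengths) - 3.0) - abs(sum(spacings) - 1.5)

    pop = [[random.randrange(256) for _ in range(GENOME_LEN)] for _ in range(50)]
    for generation in range(20):
        pop.sort(key=fitness, reverse=True)
        parents, children = pop[:10], []
        while len(children) < len(pop) - 10:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < 0.1:                     # byte mutation
                child[random.randrange(GENOME_LEN)] = random.randrange(256)
            children.append(child)
        pop = parents + children
    print(decode(pop[0]))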
Rodríguez-Moya, Javier; Argandoña, Montserrat; Reina-Bueno, Mercedes; Nieto, Joaquín J; Iglesias-Guerra, Fernando; Jebbar, Mohamed; Vargas, Carmen
2010-10-13
Osmosensing and associated signal transduction pathways have not yet been described in obligately halophilic bacteria. Chromohalobacter salexigens is a halophilic bacterium with a broad range of salt tolerance. In response to osmotic stress, it synthesizes and accumulates large amounts of the compatible solutes ectoine and hydroxyectoine. In a previous work, we showed that ectoines can also be accumulated upon transport from the external medium, and that they can be used as carbon sources at optimal, but not at low, salinity. This was related to insufficient ectoine transport under the latter conditions. A C. salexigens Tn1732-induced mutant (CHR95) showed delayed growth with glucose at low and optimal salinities, could not grow at high salinity, and was able to use ectoines as carbon sources at low salinity. CHR95 was affected in the transport and/or metabolism of glucose and showed a deregulated ectoine uptake at any salinity, but it was not affected in ectoine metabolism. Transposon insertion in CHR95 caused deletion of three genes, Csal0865-Csal0867: acs, encoding an acetyl-CoA synthase; mntR, encoding a transcriptional regulator of the DtxR/MntR family; and eupR, encoding a putative two-component response regulator with a LuxR_C-like DNA-binding helix-turn-helix domain. A single mntR mutant was sensitive to manganese, suggesting that mntR encodes a manganese-dependent transcriptional regulator. Deletion of eupR led to salt sensitivity and enabled the mutant strain to use ectoines as a carbon source at low salinity. Domain analysis placed EupR in the NarL/FixJ family of two-component response regulators. Finally, the protein encoded by Csal869, located three genes downstream of eupR, was suggested to be the cognate histidine kinase of EupR. This protein was predicted to be a hybrid histidine kinase with one transmembrane and one cytoplasmic sensor domain. This work represents the first example of the involvement of a two-component response regulator in the osmoadaptation of a true halophilic bacterium. Our results pave the way to the elucidation of the signal transduction pathway involved in the control of ectoine transport in C. salexigens.
Virtual Environments for Soldier Training via Editable Demonstrations (VESTED)
2011-04-01
demonstrations as visual depictions of task performance, though sound and especially verbal communications involved with the task can also be essential... or any component cue alone (e.g., Janelle, Champenoy, Coombes, & Mousseau, 2003). Neurophysiology. Recent neurophysiological research has... provides insight about how VESTED functions, what features to modify should it yield less than optimal results, and how to encode, communicate and
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of synthetic assembled gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-bas...
Integrated source and channel encoded digital communication system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.; Trumpis, B. D.; Udalov, S.
1975-01-01
Various aspects of space shuttle communication systems were studied. The following major areas were investigated: burst error correction for shuttle command channels; performance optimization and design considerations for Costas receivers with and without bandpass limiting; experimental techniques for measuring low level spectral components of microwave signals; and potential modulation and coding techniques for the Ku-band return link. Results are presented.
Reading Faces: From Features to Recognition.
Guntupalli, J Swaroop; Gobbini, M Ida
2017-12-01
Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal Eye-Gaze Fixation Position for Face-Related Neural Responses
Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina
2013-01-01
It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, indexing the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual-field advantage) coupled with the alignment of a face stimulus to a stored face template. PMID:23762224
Shang, Yonglei; Tesar, Devin; Hötzel, Isidro
2015-10-01
A recently described dual-host phage display vector that allows expression of immunoglobulin G (IgG) in mammalian cells bypasses the need for subcloning of phage display clone inserts to mammalian vectors for IgG expression in large antibody discovery and optimization campaigns. However, antibody discovery and optimization campaigns usually need different antibody formats for screening, requiring reformatting of the clones in the dual-host phage display vector to an alternative vector. We developed a modular protein expression system mediated by RNA trans-splicing to enable the expression of different antibody formats from the same phage display vector. The heavy-chain region encoded by the phage display vector is directly and precisely fused to different downstream heavy-chain sequences encoded by complementing plasmids simply by joining exons in different pre-mRNAs by trans-splicing. The modular expression system can be used to efficiently express structurally correct IgG and Fab fragments or other antibody formats from the same phage display clone in mammalian cells without clone reformatting. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Wang, Yang; Wu, Lin
2018-07-01
Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for multi-view spectral clustering: it elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, yielding a better graph partition than single-view counterparts. In this paper we revisit it from a fundamentally different perspective, viewing LRR as essentially a latent clustered orthogonal projection based representation coupled with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis, orthogonal to the others, that indicates its members and intuitively projects the view-specific feature representation onto the space spanned by all orthogonal basis vectors to characterize the cluster structures. Upon this finding, we propose the following: (1) we decompose LRR into a latent clustered orthogonal representation via low-rank matrix factorization, to encode more flexible cluster structures than LRR over primal data objects; (2) we convert the problem of LRR into that of simultaneously learning the orthogonal clustered representation and an optimized local graph structure for each view; (3) the learned orthogonal clustered representations and local graph structures enjoy the same magnitude for each view, so that the ideal multi-view consensus can be readily achieved. Experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.
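The latent clustered orthogonal representation can be illustrated with a generic alternating factorization: fix V and update U by an orthogonal Procrustes step, then update V by least squares. This is a minimal sketch of the factorization idea only; the paper's full objective additionally couples an optimized local graph per view, which is omitted here:

    import numpy as np

    def orthogonal_clustered_representation(Z, k, iters=50, seed=0):
        """Factor an n x n self-expressive matrix Z as U @ V.T with
        U.T @ U = I (k orthogonal cluster bases), alternating a Procrustes
        update of U with a least-squares update of V."""
        rng = np.random.default_rng(seed)
        V = rng.standard_normal((Z.shape[0], k))
        for _ in range(iters):
            P, _, Qt = np.linalg.svd(Z @ V, full_matrices=False)
            U = P @ Qt                    # nearest orthonormal factor of Z @ V
            V = Z.T @ U                   # least-squares V given orthonormal U
        return U, V

    # A point's cluster can be read off as the column of U (or V) holding
    # the largest-magnitude entry in its row.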
Cecconi, Massimiliano; Parodi, Maria I.; Formisano, Francesco; Spirito, Paolo; Autore, Camillo; Musumeci, Maria B.; Favale, Stefano; Forleo, Cinzia; Rapezzi, Claudio; Biagini, Elena; Davì, Sabrina; Canepa, Elisabetta; Pennese, Loredana; Castagnetta, Mauro; Degiorgio, Dario; Coviello, Domenico A.
2016-01-01
Hypertrophic cardiomyopathy (HCM) is mainly associated with mutations in myosin, heavy chain 7 (MYH7) and myosin binding protein C, cardiac (MYBPC3). In order to better explain the clinical and genetic heterogeneity in HCM patients, in this study we implemented a targeted next-generation sequencing (NGS) assay. An Ion AmpliSeq™ Custom Panel was created for the enrichment of 19 genes, 9 of which did not encode thick/intermediate and thin myofilament (TTm) proteins, with 3 of these responsible for HCM phenocopies. Ninety-two DNA samples were analyzed by the Ion Personal Genome Machine: 73 DNA samples (training set), previously genotyped in some of the genes by Sanger sequencing, were used to optimize the NGS strategy, whereas 19 DNA samples (discovery set) allowed the evaluation of NGS performance. In the training set, we identified 72 out of 73 expected mutations and 15 additional mutations: the molecular diagnosis was achieved in one patient with a previously wild-type status and the pre-excitation syndrome was explained in another. In the discovery set, we identified 20 mutations, 5 of which were in genes encoding non-TTm proteins, increasing the diagnostic yield by approximately 20%: a single mutation in genes encoding non-TTm proteins was identified in 2 out of 3 borderline HCM patients, whereas co-occurring mutations in genes encoding TTm proteins and galactosidase alpha (GLA) were characterized in a male with HCM and multiorgan dysfunction. Our combined targeted NGS-Sanger sequencing-based strategy allowed the molecular diagnosis of HCM with greater efficiency than using conventional (Sanger) sequencing alone. Mutant alleles encoding non-TTm proteins may aid in the complete understanding of the genetic and phenotypic heterogeneity of HCM: co-occurring mutations of genes encoding TTm and non-TTm proteins could explain the wide variability of the HCM phenotype, whereas mutations in genes encoding only non-TTm proteins are identifiable in patients with a milder HCM status. PMID:27600940
Man, Zaiwei; Rao, Zhiming; Xu, Meijuan; Guo, Jing; Yang, Taowei; Zhang, Xian; Xu, Zhenghong
2016-11-01
L-arginine, a semi-essential amino acid, is an important amino acid in the food flavoring and pharmaceutical industries. Its production by microbial fermentation is gaining more and more attention. In previous work, we obtained a new L-arginine-producing Corynebacterium crenatum (a subspecies of Corynebacterium glutamicum) through mutation breeding. In this work, we enhanced L-arginine production through improvement of the intracellular environment. First, two NAD(P)H-dependent H2O2-forming flavin reductases, Frd181 (encoded by the frd1 gene) and Frd188 (encoded by frd2), were identified in C. glutamicum for the first time. Next, the roles of Frd181 and Frd188 in C. glutamicum were studied by overexpression and deletion of the encoding genes, and the results showed that the inactivation of Frd181 and Frd188 was beneficial for cell growth and L-arginine production, owing to the decreased H2O2 synthesis and intracellular reactive oxygen species (ROS) level, and the increased intracellular NADH and ATP levels. Then, the ATP level was further increased by deletion of noxA (encoding NADH oxidase) and amn (encoding AMP nucleosidase), and overexpression of pgk (encoding 3-phosphoglycerate kinase) and pyk (encoding pyruvate kinase), and the L-arginine production and yield from glucose were significantly increased. In fed-batch fermentation, the L-arginine production and yield from glucose of the final strain reached 57.3 g/L and 0.326 g/g, respectively, which were 49.2% and 34.2% higher than those of the parent strain. ROS and ATP are important elements of the intracellular environment, and L-arginine biosynthesis requires a large amount of ATP. For the first time, we enhanced L-arginine production and yield from glucose by reducing H2O2 synthesis and increasing the ATP supply. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-06-01
Integration of production planning and scheduling is a class of problems commonly found in manufacturing industry. This class of problems associated with precedence constraint has been previously modeled and optimized by the authors, in which, it requires a multidimensional optimization at the same time: what to make, how many to make, where to make and the order to make. It is a combinatorial, NP-hard problem, for which no polynomial time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, GA with new features in chromosome encoding, crossover, mutation, selection as well as algorithm structure is developed herein. With the proposed structure, the proposed GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated by a complex numerical example in which performance of the proposed GA is compared with those of three commercial optimization solvers.
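To cope with variable-length solutions, the crossover operator has to allow offspring whose length differs from either parent's. One common way to do this (an illustrative sketch, not necessarily the operator used in the paper; the job labels are invented) is to cut each parent at its own random point:

    import random

    def variable_length_crossover(p1, p2):
        """One-point crossover with independent cut points, so offspring
        lengths can differ from both parents' lengths."""
        c1 = random.randrange(1, len(p1))
        c2 = random.randrange(1, len(p2))
        return p1[:c1] + p2[c2:], p2[:c2] + p1[c1:]

    a = ["make_P1@plant1", "make_P2@plant2", "make_P1@plant3", "make_P3@plant1"]
    b = ["make_P2@plant1", "make_P3@plant3", "make_P1@plant2"]
    print(variable_length_crossover(a, b))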
A two-objective optimization scheme for high-OSNR and low-power-consuming all-optical networks
NASA Astrophysics Data System (ADS)
Abedifar, Vahid; Mirjalili, Seyed Mohammad; Eshghi, Mohammad
2015-01-01
In all-optical networks, the ASE noise of the optical power amplifiers is a major impairment, making the OSNR the dominant parameter in QoS. In this paper, a two-objective optimization scheme using Multi-Objective Particle Swarm Optimization (MOPSO) is proposed to reach the maximum OSNR for all channels while the optical power consumed by EDFAs and lasers is minimized. Two scenarios are investigated: Scenario 1 and Scenario 2. The former optimizes the gain values of a predefined number of EDFAs in the physical links; the gain values may differ from each other. The latter optimizes the gain value of the EDFAs (assumed identical within each physical link) in addition to the number of EDFAs for each physical link. In both scenarios, the launch powers of the lasers are also taken into account during the optimization process. Two novel encoding methods are proposed to uniquely represent the problem solutions. Two virtual demand sets are considered for evaluating the performance of the proposed optimization scheme. The simulation results are described for both scenarios and both virtual demands.
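The two particle encodings might take the following shape. This is a sketch only: the names, orderings and the objective convention are assumptions, and the OSNR/power evaluation of a decoded particle would come from the network model:

    import numpy as np

    # Scenario 1: per-EDFA gains, EDFA counts per link fixed in advance.
    def decode_scenario1(x, edfas_per_link, n_lasers):
        n_edfa = sum(edfas_per_link)
        return {"gain_dB": x[:n_edfa],
                "launch_dBm": x[n_edfa:n_edfa + n_lasers]}

    # Scenario 2: one gain per link (identical EDFAs within a link) plus
    # an integer EDFA count per link.
    def decode_scenario2(x, n_links, n_lasers):
        return {"gain_dB_per_link": x[:n_links],
                "edfas_per_link": np.rint(x[n_links:2 * n_links]).astype(int),
                "launch_dBm": x[2 * n_links:2 * n_links + n_lasers]}

    def objectives(min_osnr_dB, total_power_mW):
        # MOPSO minimizes both entries; the worst-channel OSNR is negated
        # so that maximizing it and minimizing power are jointly pursued.
        return (-min_osnr_dB, total_power_mW)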
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schäfer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.
2014-12-04
We seek a realistic implementation of multimode Gaussian entangled states that can realize the optimal encoding for quantum bosonic Gaussian channels with memory. For a Gaussian channel with classical additive Markovian correlated noise and a lossy channel with non-Markovian correlated noise, we demonstrate the usefulness of Gaussian matrix-product states (GMPS). These states can be generated sequentially and may, in principle, approximate any Gaussian state well. We show that we can achieve up to 99.9% of the classical Gaussian capacity with GMPS requiring squeezing parameters that are reachable with current technology. This may offer a way towards an experimental realization.
On the possible roles of microsaccades and drifts in visual perception.
Ahissar, Ehud; Arieli, Amos; Fried, Moshe; Bonneh, Yoram
2016-01-01
During natural viewing, large saccades shift the visual gaze from one target to another every few hundred milliseconds. The role of microsaccades (MSs), small saccades that show up during long fixations, is still debated. A major debate is whether MSs are used to redirect the visual gaze to a new location or to encode visual information through their movement. We argue that these two functions cannot be optimized simultaneously and present several pieces of evidence suggesting that MSs redirect the visual gaze and that the visual details are sampled and encoded by ocular drifts. We show that drift movements are indeed suitable for visual encoding. Yet, it is not clear to what extent drift movements are controlled by the visual system, and to what extent they interact with saccadic movements. We analyze several possible control schemes for saccadic and drift movements and propose experiments that can discriminate between them. We present the results of preliminary analyses of existing data as a sanity check on the testability of our predictions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ievlev, Anton; Kalinin, Sergei V.
2015-05-28
Ferroelectric materials are broadly considered for information storage due to the extremely high storage and information-processing densities they enable. To date, ferroelectric-based data storage has invariably relied on the formation of cylindrical domains, allowing for binary information encoding. Here we demonstrate and explore the potential of high-density encoding based on domain morphology. We explore domain morphogenesis during tip-induced polarization switching by sequences of positive and negative pulses in a lithium niobate single crystal and demonstrate the principle of information coding by the shape and size of the domains. We applied cross-correlation and neural network approaches for recognition of the switching sequence by the shape of the resulting domains and establish optimal parameters for domain shape recognition. These studies both provide insight into the highly non-trivial mechanism of domain switching and potentially establish a new paradigm for multilevel information storage and content-retrieval memories. Furthermore, this approach opens a pathway to the exploration of domain switching mechanisms via shape analysis.
Oculomotor preparation as a rehearsal mechanism in spatial working memory.
Pearson, David G; Ball, Keira; Smith, Daniel T
2014-09-01
There is little consensus regarding the specific processes responsible for encoding, maintenance, and retrieval of information in visuo-spatial working memory (VSWM). One influential theory is that VSWM may involve activation of the eye-movement (oculomotor) system. In this study we experimentally prevented healthy participants from planning or executing saccadic eye-movements during the encoding, maintenance, and retrieval stages of visual and spatial working memory tasks. Participants experienced a significant reduction in spatial memory span only when oculomotor preparation was prevented during encoding or maintenance. In contrast there was no reduction when oculomotor preparation was prevented only during retrieval. These results show that (a) involvement of the oculomotor system is necessary for optimal maintenance of directly-indicated locations in spatial working memory and (b) oculomotor preparation is not necessary during retrieval from spatial working memory. We propose that this study is the first to unambiguously demonstrate that the oculomotor system contributes to the maintenance of spatial locations in working memory independently from the involvement of covert attention. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Possenti, Andrea; Vendruscolo, Michele; Camilloni, Carlo; Tiana, Guido
2018-05-23
Proteins employ the information stored in the genetic code and translated into their sequences to carry out well-defined functions in the cellular environment. The possibility to encode such functions is controlled by the balance between the amount of information supplied by the sequence and that left after the protein has folded into its structure. We study the amount of information necessary to specify the protein structure, providing an estimate that takes into account the thermodynamic properties of protein folding. We thus show that the information remaining in the protein sequence after encoding for its structure (the 'information gap') is very close to what is needed to encode its function and interactions. Then, by predicting the information gap directly from the protein sequence, we show that it may be possible to use these insights from information theory to discriminate between ordered and disordered proteins, to identify unknown functions, and to optimize artificially designed protein sequences. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
Satz, Alexander L; Hochstrasser, Remo; Petersen, Ann C
2017-04-10
To optimize future DNA-encoded library design, we have attempted to quantify the library size at which the signal becomes undetectable. To accomplish this we (i) have calculated that percent yields of individual library members following a screen range from 0.002 to 1%, (ii) extrapolated that ∼1 million copies per library member are required at the outset of a screen, and (iii) from this extrapolation predict that false negative rates will begin to outweigh the benefit of increased diversity at library sizes >10^8. The above analysis is based upon a large internal data set comprising multiple screens, targets, and libraries; we also augmented our internal data with all currently available literature data. In theory, high false negative rates may be overcome by employing larger amounts of library; however, we argue that using more than currently reported amounts of library (≫10 nmoles) is impractical. The above conclusions may be generally applicable to other DNA encoded library platforms, particularly those platforms that do not allow for library amplification.
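The arithmetic behind this cutoff is easy to reproduce: 10 nmol of an equimolar library contains about 6 × 10^15 molecules, so the copies available per member fall below the estimated ~10^6 threshold somewhere between 10^9 and 10^10 members; the stricter >10^8 figure in the abstract presumably also folds in the low (0.002-1%) per-member recovery after selection. A small worked check:

    AVOGADRO = 6.022e23

    def copies_per_member(library_nmol, library_size):
        """Copies of each member when `library_nmol` of an equimolar
        DNA-encoded library is used in a screen."""
        return library_nmol * 1e-9 * AVOGADRO / library_size

    for size in (1e8, 1e9, 1e10):
        print(f"{size:.0e} members: {copies_per_member(10, size):.1e} copies each")
    # -> 6.0e+07, 6.0e+06 and 6.0e+05 copies per member, respectively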
Guo, Mei; Lu, Fuping; Pu, Jun; Bai, Dongqing; Du, Lianxiang
2005-11-01
A cDNA encoding for laccase was isolated from the ligninolytic fungus Trametes versicolor by RNA-PCR. The cDNA corresponds to the gene Lcc1, which encodes a laccase isoenzyme of 498 amino acid residues preceded by a 22-residue signal peptide. The Lcc1 cDNA was cloned into the vectors pMETA and pMETalphaA and expressed in Pichia methanolica. The laccase activity obtained with the Saccharomyces cerevisiae alpha-factor signal peptide was found to be twofold higher than that obtained with the native secretion signal peptide. The extracellular laccase activity in recombinants with the alpha-factor signal peptide was 9.79 U ml(-1). The presence of 0.2 mM copper was necessary for optimal activity of laccase. The expression level was favoured by lower cultivation temperature. The identity of the recombinant protein was further confirmed by immunodetection using Western blot analysis. As expected, the molecular mass of the mature laccase was 64.0 kDa, similar to that of the native form.
Information theoretical assessment of visual communication with subband coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.
1994-09-01
A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally, this role has been analyzed strictly in the digital domain, neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is "suboptimal." We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image-gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
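For reference, the Wiener restoration filter the assessment builds on has the standard frequency-domain form below (notation assumed here: τ(ν) is the image-gathering system transfer function, Φ_s and Φ_n are the power spectral densities of the radiance signal and the noise):

    \[
      \Psi(\nu) \;=\; \frac{\tau^{*}(\nu)\,\Phi_s(\nu)}
                           {\lvert \tau(\nu) \rvert^{2}\,\Phi_s(\nu) + \Phi_n(\nu)}
    \]

As Φ_n → 0 this tends to the inverse filter 1/τ(ν); the noise term is what couples the restoration, and hence the end-to-end information rate, back to the design of the image-gathering device.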
HEVC optimizations for medical environments
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; García, Carlos; Meyer-Baese, Uwe; Meyer-Baese, Anke
2016-05-01
HEVC/H.265 is the most interesting and cutting-edge topic in the world of digital video compression, allowing the required bandwidth to be halved in comparison with the previous H.264 standard. Telemedicine services, and in general any medical video application, can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper a method for reducing the HEVC complexity in the medical environment is proposed. The sequences that are typically processed in this context contain several homogeneous regions. Leveraging these regions, it is possible to simplify the HEVC flow while maintaining high quality. In comparison with the HM16.2 reference encoder, the encoding time is reduced by up to 75%, with a negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
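A homogeneity-driven simplification of the HEVC flow could be sketched as below; the variance test and the reduced candidate list are generic placeholders, since the abstract does not specify the authors' exact criterion:

    import numpy as np

    def is_homogeneous(block, var_threshold=20.0):
        """Flag a coding unit as homogeneous when its luma variance is
        small (threshold illustrative)."""
        return float(np.var(block)) < var_threshold

    def candidate_modes(block):
        # Homogeneous CUs test only Merge/Skip and stop splitting early;
        # other CUs keep the full HEVC mode/partition search.
        if is_homogeneous(block):
            return ["MERGE/SKIP"]
        return ["MERGE/SKIP", "inter_2Nx2N", "inter_NxN", "intra", "split"]

    rng = np.random.default_rng(1)
    cu = 128 + rng.normal(0.0, 2.0, (32, 32))     # nearly flat 32x32 block
    print(candidate_modes(cu))                    # -> ['MERGE/SKIP']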
NASA Astrophysics Data System (ADS)
Cascio, David M.
1988-05-01
States of nature or observed data are often stochastically modelled as Gaussian random variables. At times it is desirable to transmit this information from a source to a destination with minimal distortion. Complicating this objective is the possible presence of an adversary attempting to disrupt this communication. In this report, solutions are provided to a class of minimax and maximin decision problems, which involve the transmission of a Gaussian random variable over a communications channel corrupted by both additive Gaussian noise and probabilistic jamming noise. The jamming noise is termed probabilistic in the sense that with nonzero probability 1-P, the jamming noise is prevented from corrupting the channel. We shall seek to obtain optimal linear encoder-decoder policies which minimize given quadratic distortion measures.
SeaQuaKE: Sea-optimized Quantum Key Exchange
2015-01-01
of photon pairs in both polarization [3] and time-bin [4] degrees of freedom simultaneously. Entanglement analysis components in both the... greater throughput per entangled photon pair compared to alternative sources that encode in only a... [Figure residue; recoverable diagram labels: Photon-pair source; Time-bin entanglement; Polarization Entanglement & Pair Generation; Hyperentangled Photon Pair Source; Wavelength availability; Power; Pulse rate; Time-bin Mux; Waveguide]
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
Design, Optimization and Application of Small Molecule Biosensor in Metabolic Engineering.
Liu, Yang; Liu, Ye; Wang, Meng
2017-01-01
The development of synthetic biology and metabolic engineering has painted a great future for the bio-based economy, including fuels, chemicals, and drugs produced from renewable feedstocks. With the rapid advance of genome-scale modeling, pathway assembly and genome engineering/editing, our ability to design and generate microbial cell factories with various phenotypes becomes almost limitless. However, our lack of ability to measure and exert precise control over metabolite-concentration-related phenotypes becomes a bottleneck in metabolic engineering. Genetically encoded small molecule biosensors, which provide the means to couple metabolite concentration to measurable or actionable outputs, are highly promising solutions to this bottleneck. Here we review recent advances in the design, optimization and application of small molecule biosensors in metabolic engineering, with particular focus on optimization strategies for transcription factor (TF)-based biosensors.
From samples to populations in retinex models
NASA Astrophysics Data System (ADS)
Gianini, Gabriele
2017-05-01
Some spatial color algorithms, such as Brownian Milano retinex (MI-retinex) and random spray retinex (RSR), are based on sampling. In Brownian MI-retinex, memoryless random walks (MRWs) explore the neighborhood of a pixel and are then used to compute its output. Considering the relative redundancy and inefficiency of MRW exploration, the algorithm RSR replaced the walks by samples of points (the sprays). Recent works point to the fact that a mapping from the sampling formulation to the probabilistic formulation of the corresponding sampling process can offer useful insights into the models, at the same time featuring intrinsically noise-free outputs. The paper continues the development of this concept and shows that the population-based versions of RSR and Brownian MI-retinex can be used to obtain analytical expressions for the outputs of some test images. The comparison of the two analytic expressions from RSR and from Brownian MI-retinex demonstrates not only that the two outputs are, in general, different but also that they depend in a qualitatively different way upon the features of the image.
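For concreteness, a per-pixel RSR estimate can be sketched as follows: each spray samples points around the target pixel, and the output is the pixel's intensity normalized by the spray maximum, averaged over sprays. The uniform-disc spray used here is a simplification of RSR's actual radial point density:

    import numpy as np

    def rsr_pixel(img, p, n_sprays=10, spray_size=30, radius=50.0, seed=0):
        """Random spray retinex lightness estimate at pixel p (row, col)."""
        rng = np.random.default_rng(seed)
        h, w = img.shape
        acc = 0.0
        for _ in range(n_sprays):
            r = radius * np.sqrt(rng.random(spray_size))   # uniform over disc
            th = 2.0 * np.pi * rng.random(spray_size)
            rows = np.clip((p[0] + r * np.sin(th)).astype(int), 0, h - 1)
            cols = np.clip((p[1] + r * np.cos(th)).astype(int), 0, w - 1)
            acc += img[p] / max(img[rows, cols].max(), 1e-9)
        return acc / n_sprays

The population-based versions discussed in the paper replace these random sprays with their limiting distribution, which is what removes the sampling noise from the output.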
Rullan, Marc; Benzinger, Dirk; Schmidt, Gregor W; Milias-Argeitis, Andreas; Khammash, Mustafa
2018-05-17
Transcription is a highly regulated and inherently stochastic process. The complexity of signal transduction and gene regulation makes it challenging to analyze how the dynamic activity of transcriptional regulators affects stochastic transcription. By combining a fast-acting, photo-regulatable transcription factor with nascent RNA quantification in live cells and an experimental setup for precise spatiotemporal delivery of light inputs, we constructed a platform for the real-time, single-cell interrogation of transcription in Saccharomyces cerevisiae. We show that transcriptional activation and deactivation are fast and memoryless. By analyzing the temporal activity of individual cells, we found that transcription occurs in bursts, whose duration and timing are modulated by transcription factor activity. Using our platform, we regulated transcription via light-driven feedback loops at the single-cell level. Feedback markedly reduced cell-to-cell variability and led to qualitative differences in cellular transcriptional dynamics. Our platform establishes a flexible method for studying transcriptional dynamics in single cells. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance, are discussed. The error performance of these codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multi-stage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block decoding error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memoryless transformation of a Gaussian process whose covariance is uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that the use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
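The generation step described above — a truncated Karhunen-Loève expansion of the Gaussian process followed by the pointwise exponential that yields the lognormal cross section — can be sketched as follows. The exponential covariance and all parameter values are assumptions for the sketch; the moment matching makes the lognormal field reproduce the requested mean and relative standard deviation:

    import numpy as np

    def lognormal_xs_realization(x, mean_xs, rel_std, corr_len, n_modes, rng):
        """One realization of a lognormal random cross section on grid x
        via a truncated (discrete) Karhunen-Loeve expansion."""
        sig2 = np.log(1.0 + rel_std**2)           # underlying Gaussian variance
        mu = np.log(mean_xs) - 0.5 * sig2         # underlying Gaussian mean
        C = sig2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        lam, phi = np.linalg.eigh(C)              # ascending eigenpairs
        lam, phi = lam[::-1][:n_modes], phi[:, ::-1][:, :n_modes]
        xi = rng.standard_normal(n_modes)
        g = mu + phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)
        return np.exp(g)                          # memoryless transformation

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 200)
    sigma_t = lognormal_xs_realization(x, 1.0, 0.5, 1.0, 20, rng)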
NASA Astrophysics Data System (ADS)
Boche, H.; Janßen, G.
2014-08-01
We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless but the source state is an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. ["Universal quantum state merging," J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as the entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS, we show by example that the robustification- and elimination-based approach generally leads to suboptimal entanglement as well as classical communication rates.
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for the identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noise. The method studies multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memoryless block is approximated with arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving-average noise as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters: a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
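The hierarchical idea — alternately solving two linear least-squares subproblems, one for the parameter vector and one for the parameter matrix, each with the other block held fixed — can be seen in a toy bilinear model. This is a stand-in for the paper's Hammerstein/Wiener-ARMAX formulation, which additionally estimates the unmeasured noise terms by iterative substitution:

    import numpy as np

    rng = np.random.default_rng(0)
    N, n, m = 200, 4, 3
    X = rng.standard_normal((N, n))
    w_true = np.array([1.0, -2.0, 0.5, 3.0])      # "vector" parameter block
    c_true = np.array([2.0, -1.0, 0.5])           # "matrix" block (rank-1 here)
    Y = np.outer(X @ w_true, c_true) + 0.01 * rng.standard_normal((N, m))

    w = rng.standard_normal(n)                    # hierarchical LS iteration
    for _ in range(30):
        z = X @ w
        c = Y.T @ z / (z @ z)                             # c given w
        w, *_ = np.linalg.lstsq(X, Y @ c / (c @ c), rcond=None)  # w given c
    print(np.allclose(np.outer(X @ w, c), np.outer(X @ w_true, c_true), atol=0.05))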
Datta, Dibyadyuti; Bansal, Geetha P; Gerloff, Dietlind L; Ellefsen, Barry; Hannaman, Drew; Kumar, Nirbhay
2017-01-05
Pfs48/45 and Pfs25 are leading candidates for the development of Plasmodium falciparum transmission blocking vaccines (TBV). Expression of Pfs48/45 in the erythrocytic sexual stages and its presentation to the immune system during infection in the human host also make it ideal for natural boosting. However, it has been challenging to produce fully folded, functionally active Pfs48/45 using various protein expression platforms. In this study, we demonstrate that full-length Pfs48/45 encoded by DNA plasmids is able to induce significant transmission reducing immune responses. DNA plasmids encoding Pfs48/45 based on the native (WT), codon optimized (SYN), or codon optimized and mutated (MUT1 and MUT2) sequences, the latter designed to prevent any asparagine (N)-linked glycosylation, were compared with or without intramuscular electroporation (EP). EP significantly enhanced antibody titers and transmission blocking activity elicited by immunization with the SYN Pfs48/45 DNA vaccine. Mosquito membrane feeding assays also revealed improved functional immunogenicity of SYN Pfs48/45 (N-glycosylation sites intact) as compared to MUT1 or MUT2 Pfs48/45 DNA plasmids (all N-glycosylation sites mutated). Boosting with recombinant Pfs48/45 protein after immunization with each of the different DNA vaccines resulted in significant boosting of the antibody response and improved transmission reducing capabilities of all four DNA vaccines. Finally, immunization with a combination of DNA plasmids (SYN Pfs48/45 and SYN Pfs25) also supports the possibility of combining antigens targeting different life cycle stages of the parasite during transmission through mosquitoes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Murakami, Taira; Kanai, Tamotsu; Takata, Hiroki; Kuriki, Takashi; Imanaka, Tadayuki
2006-01-01
Branching enzyme (BE) catalyzes formation of the branch points in glycogen and amylopectin by cleavage of the α-1,4 linkage and its subsequent transfer to the α-1,6 position. We have identified a novel BE encoded by an uncharacterized open reading frame (TK1436) of the hyperthermophilic archaeon Thermococcus kodakaraensis KOD1. TK1436 encodes a conserved protein showing similarity to members of glycoside hydrolase family 57 (GH-57 family). At the C terminus of the TK1436 protein, two copies of a helix-hairpin-helix (HhH) motif were found. TK1436 orthologs are distributed in archaea of the order Thermococcales, cyanobacteria, some actinobacteria, and a few other bacterial species. When recombinant TK1436 protein was incubated with amylose used as the substrate, a product peak was detected by high-performance anion-exchange chromatography, eluting more slowly than the substrate. Isoamylase treatment of the reaction mixture significantly increased the level of short-chain α-glucans, indicating that the reaction product contained many α-1,6 branching points. The TK1436 protein showed an optimal pH of 7.0, an optimal temperature of 70°C, and thermostability up to 90°C, as determined by the iodine-staining assay. These properties were the same when a protein devoid of HhH motifs (the TK1436ΔH protein) was used. The average molecular weight of branched glucan after reaction with the TK1436ΔH protein was over 100 times larger than that of the starting substrate. These results clearly indicate that TK1436 encodes a structurally novel BE belonging to the GH-57 family. Identification of an overlooked BE species provides new insights into glycogen biosynthesis in microorganisms. PMID:16885460
Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.
Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A
2016-08-25
There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
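A toy version of the reduction for binary sites can be written with networkx; the energies and the Kolmogorov-Zabih-style gadget below are illustrative stand-ins for the paper's titration-state construction, and exactness requires each pairwise table to be submodular:

```python
import networkx as nx

def min_energy_labeling(unary, pairwise):
    """Minimize E(x) = sum_i unary[i][x_i] + sum_(i,j) pairwise[(i,j)][x_i][x_j]
    over x_i in {0,1} with one max-flow/min-cut computation. Source side of
    the cut encodes state 0, sink side state 1."""
    u0 = {i: c[0] for i, c in unary.items()}   # cost of state 0
    u1 = {i: c[1] for i, c in unary.items()}   # cost of state 1
    G = nx.DiGraph()
    for (i, j), t in pairwise.items():
        A, B, C, D = t[0][0], t[0][1], t[1][0], t[1][1]
        lam = B + C - A - D
        assert lam >= 0, "pairwise term must be submodular"
        u1[i] += C - A                  # linear-in-x_i part of the pair term
        u1[j] += D - C                  # linear-in-x_j part
        G.add_edge(i, j, capacity=lam)  # cut iff x_i = 0 and x_j = 1
    for i in unary:
        m = min(u0[i], u1[i])           # shift keeps capacities non-negative
        G.add_edge("s", i, capacity=u1[i] - m)
        G.add_edge(i, "t", capacity=u0[i] - m)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    return {i: (0 if i in source_side else 1) for i in unary}

# Two interacting sites; brute-force enumeration confirms {1: 0, 2: 0} is optimal.
print(min_energy_labeling({1: (0.0, 1.0), 2: (0.5, 0.0)},
                          {(1, 2): [[0.0, 0.8], [0.7, 0.0]]}))
```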
Optimization of lattice surgery is NP-hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon J.
2017-09-01
The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Comparative analyses of two Geraniaceae transcriptomes using next-generation sequencing.
Zhang, Jin; Ruhlman, Tracey A; Mower, Jeffrey P; Jansen, Robert K
2013-12-29
Organelle genomes of Geraniaceae exhibit several unusual evolutionary phenomena compared to other angiosperm families including accelerated nucleotide substitution rates, widespread gene loss, reduced RNA editing, and extensive genomic rearrangements. Since most organelle-encoded proteins function in multi-subunit complexes that also contain nuclear-encoded proteins, it is likely that the atypical organellar phenomena affect the evolution of nuclear genes encoding organellar proteins. To begin to unravel the complex co-evolutionary interplay between organellar and nuclear genomes in this family, we sequenced nuclear transcriptomes of two species, Geranium maderense and Pelargonium x hortorum. Normalized cDNA libraries of G. maderense and P. x hortorum were used for transcriptome sequencing. Five assemblers (MIRA, Newbler, SOAPdenovo, SOAPdenovo-trans [SOAPtrans], Trinity) and two next-generation technologies (454 and Illumina) were compared to determine the optimal transcriptome sequencing approach. Trinity provided the highest quality assembly of Illumina data with the deepest transcriptome coverage. An analysis to determine the amount of sequencing needed for de novo assembly revealed diminishing returns of coverage and quality with data sets larger than sixty million Illumina paired end reads for both species. The G. maderense and P. x hortorum transcriptomes contained fewer transcripts encoding the PLS subclass of PPR proteins relative to other angiosperms, consistent with reduced mitochondrial RNA editing activity in Geraniaceae. In addition, transcripts for all six plastid targeted sigma factors were identified in both transcriptomes, suggesting that one of the highly divergent rpoA-like ORFs in the P. x hortorum plastid genome is functional. The findings support the use of the Illumina platform and assemblers optimized for transcriptome assembly, such as Trinity or SOAPtrans, to generate high-quality de novo transcriptomes with broad coverage. In addition, results indicated no major improvements in breadth of coverage with data sets larger than six billion nucleotides or when sampling RNA from four tissue types rather than from a single tissue. Finally, this work demonstrates the power of cross-compartmental genomic analyses to deepen our understanding of the correlated evolution of the nuclear, plastid, and mitochondrial genomes in plants.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT used for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the transform block sizes of order 4, 8, 16, and 32, and recursive computations are needed to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed, based on a low-complexity integer DCT that can be computed from the WHT coefficients produced in the prediction stage. For CU-level rate and distortion estimation, two orthogonal matrices of order 4 and 8, newly designed in butterfly structures using only addition and shift operations, are applied to the WHT. By applying the WHT-based integer DCT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization or an inverse transform. In addition, a non-texture rate estimation based on a pseudo-entropy code is proposed to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementations of HEVC encoders, with a 9.8% loss relative to full-RDO HEVC, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
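For reference, a natural-ordered fast Walsh-Hadamard butterfly (a generic textbook routine, not the paper's specific 4x4/8x8 integer matrices) uses only additions, subtractions, and index arithmetic:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^k vector via butterflies;
    only additions and subtractions are needed, which is what makes the WHT
    attractive for hardware-friendly distortion estimation."""
    x = np.asarray(x, dtype=np.int64).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

residual = np.array([3, -1, 4, 1, -5, 9, -2, 6])
coeffs = fwht(residual)   # transform-domain coefficients for distortion estimates
```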
Profiling charge complementarity and selectivity for binding at the protein surface.
Sulea, Traian; Purisima, Enrico O
2003-05-01
A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins.
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. Therefore, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor-filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
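A standard 2-D Gabor kernel and a small hand-built bank look as follows (parameter values are invented; the point of the paper is to let particle swarm optimization choose them instead):

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine carrier of wavelength lam at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam + psi)

# A small bank over 4 orientations and 2 wavelengths; PSO would instead
# search this parameter space for the most informative filtering bands.
bank = [gabor_kernel(31, sigma=4.0, theta=t, lam=l)
        for t in np.linspace(0.0, np.pi, 4, endpoint=False)
        for l in (6.0, 10.0)]
```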
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
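The Fisher information computation behind these statements is compact for an assumed encoding model of independent Poisson neurons with Gaussian tuning curves (illustrative only):

```python
import numpy as np

def fisher_info(s, centers, width, gain):
    """I(s) = sum_i f_i'(s)^2 / f_i(s) for independent Poisson neurons with
    Gaussian tuning curves f_i; the Cramer-Rao bound is MSE >= 1 / I(s)."""
    f = gain * np.exp(-(s - centers)**2 / (2.0 * width**2))
    fp = f * (centers - s) / width**2
    return np.sum(fp**2 / f)

centers = np.linspace(-5.0, 5.0, 50)
# Narrower tuning raises I(s) here, but the abstract's point is that the
# MSE-optimal tuning width can differ sharply from the FI-optimal one.
print(1.0 / fisher_info(0.0, centers, width=1.0, gain=10.0))
```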
Active control of the spatial MRI phase distribution with optimal control theory
NASA Astrophysics Data System (ADS)
Lefebvre, Pauline M.; Van Reeth, Eric; Ratiney, Hélène; Beuf, Olivier; Brusseau, Elisabeth; Lambert, Simon A.; Glaser, Steffen J.; Sugny, Dominique; Grenier, Denis; Tse Ve Koon, Kevin
2017-08-01
This paper investigates the use of Optimal Control (OC) theory to design Radio-Frequency (RF) pulses that actively control the spatial distribution of the MRI magnetization phase. The RF pulses are generated through the application of the Pontryagin Maximum Principle and optimized so that the resulting transverse magnetization reproduces various non-trivial spatial phase patterns. Two different phase patterns are defined and the resulting optimal pulses are tested both numerically with the ODIN MRI simulator and experimentally with an agar gel phantom on a 4.7 T small-animal MR scanner. Phase images obtained in simulations and experiments are both consistent with the defined phase patterns. A practical application of phase control with OC-designed pulses is also presented, with the generation of RF pulses adapted for a Magnetic Resonance Elastography experiment. This study demonstrates the possibility of using OC-designed RF pulses to encode information in the magnetization phase and could have applications in MRI sequences using phase images.
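A toy numerical stand-in for this design problem (the paper derives pulses from the Pontryagin Maximum Principle; here a generic quasi-Newton optimizer with finite-difference gradients shapes a hard-pulse waveform toward a two-zone phase target; all scales are invented and relaxation is ignored):

```python
import numpy as np
from scipy.optimize import minimize

def rotate(m, axis, angle):
    # Rodrigues rotation of magnetization m about a unit axis.
    return (m * np.cos(angle) + np.cross(axis, m) * np.sin(angle)
            + axis * np.dot(axis, m) * (1.0 - np.cos(angle)))

def bloch(rf, offsets, dt=1e-4):
    """Hard-pulse Bloch simulation, relaxation neglected. rf holds
    interleaved x/y RF amplitudes (rad/s); each off-resonance value stands
    for one spatial position under a constant gradient."""
    wx, wy = rf[::2], rf[1::2]
    out = []
    for dw in offsets:
        m = np.array([0.0, 0.0, 1.0])
        for bx, by in zip(wx, wy):
            b = np.array([bx, by, dw])
            nb = np.linalg.norm(b)
            if nb > 0.0:
                m = rotate(m, b / nb, -nb * dt)
        out.append(m)
    return np.array(out)

def cost(rf, offsets, target_phase):
    m = bloch(rf, offsets)
    phase = np.arctan2(m[:, 1], m[:, 0])
    # Penalize phase error, reward excited transverse magnetization.
    return np.sum((phase - target_phase)**2) - np.sum(m[:, 0]**2 + m[:, 1]**2)

rng = np.random.default_rng(0)
offsets = 2.0 * np.pi * np.linspace(-200.0, 200.0, 9)       # rad/s
target = (np.pi / 2) * np.sign(np.linspace(-1.0, 1.0, 9))   # two-zone pattern
res = minimize(cost, 50.0 * rng.standard_normal(2 * 64),
               args=(offsets, target), method="L-BFGS-B",
               options={"maxiter": 50})
```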
Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhiyong; Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian 361005; Smith, Pieter E. S.
Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, as well as in in-cell and in vivo NMR applications, where speed and temporal stability are often primary concerns.
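The minimum-scans idea is easy to state in linear-algebra form (frequencies, delays, and amplitudes below are invented): with K known indirect-dimension resonances, K tailored scans give a K x K Fourier-encoding system that is solved exactly.

```python
import numpy as np

f = np.array([120.0, 340.0])               # a priori indirect-domain freqs (Hz)
K = len(f)
t1 = np.arange(K) / (2.5 * f.max())        # customized encoding delays (s)
E = np.exp(2j * np.pi * np.outer(t1, f))   # K x K Fourier-encoding matrix
a_true = np.array([1.0, 0.6])              # resonance amplitudes
scans = E @ a_true                         # one indirect point per scan
a_hat = np.linalg.solve(E, scans)          # exact recovery from K scans
```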
NASA Astrophysics Data System (ADS)
Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir
2015-11-01
The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes require high complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast inter-mode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done under the High profile, and results show that the proposed source-coding algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.
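An early-termination skeleton of this kind of fast mode decision might look as follows (features and thresholds are invented constants; the paper derives adaptive thresholds from content statistics):

```python
from dataclasses import dataclass

@dataclass
class MB:
    sad_skip: float         # SAD of the SKIP-mode prediction
    variance: float         # luma variance of the macroblock
    motion_activity: float  # e.g. best motion-vector magnitude

def candidate_modes(mb, neighbors, t_sad=200.0, t_motion=1.0, t_var=40.0):
    """Prune the inter-mode search using predicted motion activity,
    temporal stationarity, and spatial homogeneity tests."""
    predicted_activity = max(n.motion_activity for n in neighbors)
    if mb.sad_skip < t_sad and predicted_activity < t_motion:
        return ["SKIP"]                        # temporally stationary region
    if mb.variance < t_var:
        return ["16x16"]                       # spatially homogeneous region
    return ["16x16", "16x8", "8x16", "8x8"]    # otherwise search all modes

print(candidate_modes(MB(150.0, 25.0, 0.4), [MB(0, 0, 0.2), MB(0, 0, 0.6)]))
```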
Decoding sound level in the marmoset primary auditory cortex.
Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L
2017-10-01
Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
An analytical SMASH procedure (ASP) for sensitivity-encoded MRI.
Lee, R F; Westgate, C R; Weiss, R G; Bottomley, P A
2000-05-01
The simultaneous acquisition of spatial harmonics (SMASH) method of imaging with detector arrays can reduce the number of phase-encoding steps, and hence MRI scan time, several-fold. The original approach utilized numerical gradient-descent fitting with the coil sensitivity profiles to create a set of composite spatial harmonics to replace the phase-encoding steps. Here, an analytical approach for generating the harmonics is presented. A transform is derived to project the harmonics onto a set of sensitivity profiles. A sequence of Fourier, Hilbert, and inverse Fourier transforms is then applied to analytically eliminate spatially dependent phase errors from the different coils while fully preserving the spatial encoding. By combining the transform and phase correction, the original numerical image reconstruction method can be replaced by an analytical SMASH procedure (ASP). The approach also allows simulation of SMASH imaging, revealing a criterion for the ratio of the detector sensitivity profile width to the detector spacing that produces optimal harmonic generation. When the detector geometry is suboptimal, a group of quasi-harmonics arises, which can be corrected and restored to pure harmonics. The simulation also reveals high-order harmonic modulation effects, and a demodulation procedure is presented that enables application of ASP to a large number of detectors. The method is demonstrated on a phantom and humans using a standard 4-channel phased-array MRI system. Copyright 2000 Wiley-Liss, Inc.
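The numerical fitting step that ASP replaces analytically can be sketched as a least-squares projection of a target harmonic onto coil sensitivity profiles (the Gaussian profiles and geometry are invented for illustration):

```python
import numpy as np

y = np.linspace(0.0, 1.0, 256)                      # field of view
centers = np.array([0.125, 0.375, 0.625, 0.875])    # 4-element coil array
width = 0.18                                        # profile width vs. spacing
S = np.exp(-(y[None, :] - centers[:, None])**2 / (2.0 * width**2))
m = 1                                               # harmonic order
target = np.exp(1j * m * 2.0 * np.pi * y)           # desired spatial harmonic
w, *_ = np.linalg.lstsq(S.T.astype(complex), target, rcond=None)
harmonic = S.T @ w    # composite harmonic replacing one phase-encoding step
```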
Ziaei, Maryam; Peira, Nathalie; Persson, Jonas
2014-02-15
Goal-directed behavior requires that cognitive operations can be protected from emotional distraction induced by task-irrelevant emotional stimuli. The brain processes involved in attending to relevant information while filtering out irrelevant information are still largely unknown. To investigate the neural and behavioral underpinnings of attending to task-relevant emotional stimuli while ignoring irrelevant stimuli, we used fMRI to assess brain responses during attentional instructed encoding within an emotional working memory (WM) paradigm. We showed that instructed attention to emotion during WM encoding resulted in enhanced performance, by means of increased memory performance and reduced reaction time, compared to passive viewing. A similar performance benefit was also demonstrated for recognition memory performance, although for positive pictures only. Functional MRI data revealed a network of regions involved in directed attention to emotional information for both positive and negative pictures that included medial and lateral prefrontal cortices, fusiform gyrus, insula, the parahippocampal gyrus, and the amygdala. Moreover, we demonstrate that regions in the striatum, and regions associated with the default-mode network were differentially activated for emotional distraction compared to neutral distraction. Activation in a sub-set of these regions was related to individual differences in WM and recognition memory performance, thus likely contributing to performing the task at an optimal level. The present results provide initial insights into the behavioral and neural consequences of instructed attention and emotional distraction during WM encoding. © 2013.
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
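For orientation, the classical open-loop analysis that such schemes start from is the autocorrelation method solved by the Levinson-Durbin recursion (a generic baseline, not the paper's quantization-aware optimizer):

```python
import numpy as np

def lpc(frame, order):
    """Autocorrelation-method LPC via Levinson-Durbin; returns predictor
    coefficients a (with a[0] = 1) and the residual energy."""
    r = np.correlate(frame, frame, "full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coeff
        a[1:i + 1] += k * a[i - 1::-1]                      # predictor update
        err *= 1.0 - k * k                                  # residual energy
    return a, err

rng = np.random.default_rng(0)
frame = np.sin(0.3 * np.arange(240)) + 0.05 * rng.standard_normal(240)
coeffs, gain = lpc(frame, order=10)
```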
Grigorov, Boyan; Rabilloud, Jessica; Lawrence, Philip; Gerlier, Denis
2011-01-01
Background Measles virus (MV) is a member of the Paramyxoviridae family and an important human pathogen causing strong immunosuppression in affected individuals and a considerable number of deaths worldwide. Currently, measles is a re-emerging disease in developed countries. MV is usually quantified in infectious units as determined by limiting dilution and counting of plaque forming unit either directly (PFU method) or indirectly from random distribution in microwells (TCID50 method). Both methods are time-consuming (up to several days), cumbersome and, in the case of the PFU assay, possibly operator dependent. Methods/Findings A rapid, optimized, accurate, and reliable technique for titration of measles virus was developed based on the detection of virus infected cells by flow cytometry, single round of infection and titer calculation according to the Poisson's law. The kinetics follow up of the number of infected cells after infection with serial dilutions of a virus allowed estimation of the duration of the replication cycle, and consequently, the optimal infection time. The assay was set up to quantify measles virus, vesicular stomatitis virus (VSV), and human immunodeficiency virus type 1 (HIV-1) using antibody labeling of viral glycoprotein, virus encoded fluorescent reporter protein and an inducible fluorescent-reporter cell line, respectively. Conclusion Overall, performing the assay takes only 24–30 hours for MV strains, 12 hours for VSV, and 52 hours for HIV-1. The step-by-step procedure we have set up can be, in principle, applicable to accurately quantify any virus including lentiviral vectors, provided that a virus encoded gene product can be detected by flow cytometry. PMID:21915289
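The Poisson step is compact enough to state directly: after a single round of infection, the fraction of uninfected cells obeys P(uninfected) = exp(-MOI), so MOI = -ln(1 - fraction infected) and the titer follows from the cell count and inoculum volume (a sketch with invented numbers):

```python
import numpy as np

def titer_iu_per_ml(frac_infected, n_cells, inoculum_ml):
    """Infectious units per ml from single-round flow-cytometry data,
    using Poisson statistics: MOI = -ln(1 - fraction infected)."""
    moi = -np.log(1.0 - frac_infected)
    return moi * n_cells / inoculum_ml

# e.g. 25% GFP-positive cells, 1e5 cells, 0.1 ml of diluted virus:
print(titer_iu_per_ml(0.25, 1e5, 0.1))   # about 2.9e5 IU/ml
```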
Heinz, M G; Colburn, H S; Carney, L H
2001-10-01
The perceptual significance of the cochlear amplifier was evaluated by predicting level-discrimination performance based on stochastic auditory-nerve (AN) activity. Performance was calculated for three models of processing: the optimal all-information processor (based on discharge times), the optimal rate-place processor (based on discharge counts), and a monaural coincidence-based processor that uses a non-optimal combination of rate and temporal information. An analytical AN model included compressive magnitude and level-dependent-phase responses associated with the cochlear amplifier, and high-, medium-, and low-spontaneous-rate (SR) fibers with characteristic frequencies (CFs) spanning the AN population. The relative contributions of nonlinear magnitude and nonlinear phase responses to level encoding were compared by using four versions of the model, which included and excluded the nonlinear gain and phase responses in all possible combinations. Nonlinear basilar-membrane (BM) phase responses are robustly encoded in near-CF AN fibers at low frequencies. Strongly compressive BM responses at high frequencies near CF interact with the high thresholds of low-SR AN fibers to produce large dynamic ranges. Coincidence performance based on a narrow range of AN CFs was robust across a wide dynamic range at both low and high frequencies, and matched human performance levels. Coincidence performance based on all CFs demonstrated the "near-miss" to Weber's law at low frequencies and the high-frequency "mid-level bump." Monaural coincidence detection is a physiologically realistic mechanism that is extremely general in that it can utilize AN information (average-rate, synchrony, and nonlinear-phase cues) from all SR groups.
Time-optimized laser micro machining by using a new high dynamic and high precision galvo scanner
NASA Astrophysics Data System (ADS)
Jaeggi, Beat; Neuenschwander, Beat; Zimmermann, Markus; Zecherle, Markus; Boeckler, Ernst W.
2016-03-01
High accuracy, quality, and throughput are key factors in laser micro machining. To reach these goals, the ablation process, the machining strategy, and the scanning device have to be optimized. The precision is influenced by the accuracy of the galvo scanner and can be further enhanced by synchronizing the movement of the mirrors with the laser pulse train. To maintain a high machining quality, i.e., minimum surface roughness, the pulse-to-pulse distance also has to be optimized. The highest ablation efficiency is obtained by choosing the proper laser peak fluence together with the highest specific removal rate. The throughput can then be enhanced by simultaneously increasing the average power, the repetition rate, and the scanning speed so as to preserve the fluence and the pulse-to-pulse distance. A high scanning speed is therefore of essential importance. To guarantee the required excellent accuracy even at high scanning speeds, a new interferometry-based encoder technology was used that provides a high-quality signal for closed-loop control of the galvo scanner position. The low-inertia encoder design enables a very dynamic scanner system, which can be driven to very high line speeds by a specially adapted control solution. We present results with marking speeds up to 25 m/s, obtained with the new scanning system and scanner tuning using an f = 100 mm objective, while maintaining a precision of about 5 μm. Furthermore, it is shown that, especially for short line lengths, the machining time can be minimized by choosing the proper speed, which need not be the maximum one.
Spatiotemporal models for the simulation of infrared backgrounds
NASA Astrophysics Data System (ADS)
Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang
1992-09-01
It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources, consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: Given a real, infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, and histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first order statistics of the original image onto the synthetic one, thus the overall model is a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database must include images of all types of backgrounds obtained at all times of the day and all times of the year, the size of the database would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background. This approach can greatly reduce the number of actual IR backgrounds that are required to produce a sufficiently accurate mathematical model for synthesizing a similar IR background for different times of the day. Original and synthetic IR backgrounds will be presented. Previous research in simulating IR backgrounds was performed by Strenzwilk, et al., Botkin, et al., and Rapp. The most recent work of Strenzwilk, et al. was based on the use of one-dimensional ARMA models for synthesizing the images. Their results were able to retain the global statistical and spectral behavior of the original image, but the synthetic image was not visually very similar to the original. The research presented in this paper is the result of an attempt to improve upon their results, and represents a significant improvement in quality over previously obtained results.
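The core of the model, a linear FIR stage cascaded with a memoryless histogram-matching nonlinearity, can be sketched directly (the white-noise excitation and Hanning kernel below are placeholders; the paper derives the kernel with a signal enhancement algorithm and ties the excitation to feature placement):

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_background(ref, kernel, seed=0):
    """Synthesize an image with the reference's second-order structure
    (via the FIR kernel) and exact first-order statistics (via rank-based
    histogram matching, a memoryless nonlinearity)."""
    rng = np.random.default_rng(seed)
    excitation = rng.standard_normal(ref.shape)            # excitation field
    synth = fftconvolve(excitation, kernel, mode="same")   # FIR stage
    order = np.argsort(synth, axis=None)
    matched = np.empty(ref.size)
    matched[order] = np.sort(ref, axis=None)               # impose histogram
    return matched.reshape(ref.shape)

ref = np.random.default_rng(1).gamma(2.0, size=(128, 128))  # stand-in IR image
kernel = np.outer(np.hanning(9), np.hanning(9))             # stand-in FIR kernel
synthetic = synthesize_background(ref, kernel)
```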
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530
The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.
Multiobjective synchronization of coupled systems
NASA Astrophysics Data System (ADS)
Tang, Yang; Wang, Zidong; Wong, W. K.; Kurths, Jürgen; Fang, Jian-an
2011-06-01
In this paper, multiobjective synchronization of chaotic systems is investigated by simultaneously optimizing control cost and convergence speed. The coupling form and coupling strength are optimized by an improved multiobjective evolutionary approach that includes a hybrid chromosome representation. The hybrid encoding scheme combines binary representation with real-number representation. The constraints on the coupling form are also considered by converting the multiobjective synchronization into a multiobjective constraint problem. In addition, the performances of the adaptive learning method and the non-dominated sorting genetic algorithm-II, as well as the effectiveness and contributions of the proposed approach, are analyzed and validated through the Rössler system in a chaotic or hyperchaotic regime and through delayed chaotic neural networks.
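The hybrid chromosome can be pictured as a binary topology mask paired with a real-valued strength matrix (a bare-bones sketch; sizes, ranges, and the convergence-time proxy are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6   # number of coupled oscillators (toy size)

def random_chromosome():
    """Hybrid encoding: binary genes select the coupling form (which links
    exist); real-valued genes set the coupling strengths."""
    form = rng.integers(0, 2, size=(N, N))          # binary part
    strength = rng.uniform(0.0, 2.0, size=(N, N))   # real-valued part
    return form, strength

def objectives(chromosome, convergence_time):
    """Two objectives to minimize jointly; convergence_time would come from
    simulating the coupled system (e.g. a Rossler network)."""
    form, strength = chromosome
    control_cost = float(np.sum(form * strength))
    return control_cost, convergence_time
```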
Enhancement of heterogeneous alkaline xylanase production in Pichia pastoris GS115
NASA Astrophysics Data System (ADS)
Zheng, Wei
2017-08-01
A series of strategies were applied to improve the expression level of the recombinant alkaline xylanase from Bacillus pumilus G1-3 in Pichia pastoris GS115. Codon optimization of the xylanase gene xynG1-3 from B. pumilus G1-3 was carried out for its heterologous expression in P. pastoris. The activity of the xylanase encoded by the optimized gene (xynG1-3-opt) was up to 33641 U/mL, which was 37% higher than that of the wild-type gene (xynG1-3). These results will greatly contribute to increasing the production of recombinant proteins in P. pastoris and improving the industrial production of the alkaline xylanase.
[Lead compound optimization strategy (5): reducing the hERG cardiac toxicity in drug development].
Zhou, Sheng-bin; Wang, Jiang; Liu, Hong
2016-10-01
The potassium channel encoded by the human ether-a-go-go-related gene (hERG) plays a very important role in physiological and pathological processes in humans. The hERG potassium channel conducts the outward current that facilitates the repolarization of myocardial cells. Some drugs have been withdrawn from the market for the serious side effects of long QT interval and arrhythmia due to blockade of the hERG channel. The strategies for lead compound optimization are to reduce the inhibitory activity against the hERG potassium channel and thereby decrease cardiac toxicity. These methods include reduction of lipophilicity and of the basicity of amines, introduction of hydroxyl and acidic groups, and restriction of conformation.
Scale Invariance in Lateral Head Scans During Spatial Exploration.
Yadav, Chetan K; Doreswamy, Yoganarasimha
2017-04-14
Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.
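Scale invariance of this kind is typically quantified by fitting a power law p(x) proportional to x^(-alpha) above a cutoff; the maximum-likelihood exponent has a closed form (a generic estimator, not necessarily the authors' exact procedure):

```python
import numpy as np

def powerlaw_alpha(x, xmin):
    """MLE exponent for p(x) ~ x^(-alpha), x >= xmin:
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    tail = np.asarray(x)
    tail = tail[tail >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

rng = np.random.default_rng(0)
samples = rng.pareto(1.5, 10000) + 1.0     # true density exponent is 2.5
print(powerlaw_alpha(samples, xmin=1.0))   # should be close to 2.5
```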
Sindarovska, Y R; Gerasymenko, I M; Sheludko, Y V; Olevinskaya, Z M; Spivak, N Y; Kuchuk, N V
2010-01-01
The human interferon alpha2b gene was transiently expressed in Nicotiana excelsior plants. Fusion with the N. plumbaginifolia calreticulin signal peptide for improved apoplast targeting, together with expression under optimized conditions, resulted in a maximal interferon activity of 3.2 x 10(3) IU/g fresh weight (FW), with an average of 2.1 +/- 0.8 x 10(3) IU/g FW. This proves that N. excelsior is a suitable host for Agrobacterium-mediated transient expression of genes encoding physiologically active human proteins. The transient expression conditions optimized for the GFP marker protein were confirmed to be preferable for hIFN alpha2b.
Kuipers, Grietje; Karyolaimos, Alexandros; Zhang, Zhe; Ismail, Nurzian; Trinco, Gianluca; Vikström, David; Slotboom, Dirk Jan; de Gier, Jan-Willem
2017-12-16
To optimize the production of membrane and secretory proteins in Escherichia coli, it is critical to harmonize the expression rates of the genes encoding these proteins with the capacity of their biogenesis machineries. Therefore, we engineered the Lemo21(DE3) strain, which is derived from the T7 RNA polymerase-based BL21(DE3) protein production strain. In Lemo21(DE3), the T7 RNA polymerase activity can be modulated by the controlled co-production of its natural inhibitor, T7 lysozyme. This setup enables precise tuning of target gene expression rates in Lemo21(DE3). The t7lys gene is expressed from the pLemo plasmid using the titratable rhamnose promoter. A disadvantage of the Lemo21(DE3) setup is that the system is based on two plasmids, a T7 expression vector and pLemo. The aim of this study was to simplify the Lemo21(DE3) setup by incorporating the key elements of pLemo in a standard T7-based expression vector. By incorporating the gene encoding the T7 lysozyme under control of the rhamnose promoter in a standard T7-based expression vector, pReX was created (ReX stands for Regulated gene eXpression). For two model membrane proteins and a model secretory protein, we show that the optimized production yields obtained with the pReX expression vector in BL21(DE3) are similar to the ones obtained with Lemo21(DE3) using a standard T7 expression vector. For another secretory protein, a c-type cytochrome, we show that pReX, in contrast to Lemo21(DE3), enables the use of a helper plasmid that is required for the maturation and hence the production of this heme c protein. Here, we created pReX, a T7-based expression vector that contains the gene encoding the T7 lysozyme under control of the rhamnose promoter. pReX enables regulated T7-based target gene expression using only one plasmid. We show that with pReX the production of membrane and secretory proteins can be readily optimized. Importantly, pReX facilitates the use of helper plasmids. Furthermore, the use of pReX is not restricted to BL21(DE3); it can in principle be used in any T7 RNAP-based strain. Thus, pReX is a versatile alternative to Lemo21(DE3).
USDA-ARS?s Scientific Manuscript database
The Autographa californica multiple nucleopolyhedrovirus (AcMNPV) odv-e56 gene encodes an occlusion-derived virus (ODV)-specific envelope protein, ODV-E56. To determine the role of ODV-E56 in oral infectivity, we produced recombinant EGFP-expressing AcMNPV clones (Ac69GFP-e56lacZ and AcIEGFP-e56lac...
Compressive Information Extraction: A Dynamical Systems Approach
2016-01-24
[Report fragments: the aim is to detect events sparsely encoded in very large data streams. Figure panels show (a) target tracking in an urban canyon and (b), (c) sample frames with contextually abnormal events. Formally, the problem is posed as establishing whether a noisy sequence is contextually abnormal (Section 2.2.3), and convex relaxations with optimality guarantees are obtained using tools from semi-algebraic geometry. Section 2.2: Detecting Contextually Abnormal Events.]
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
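A toy-scale version of the weaker single-commodity-flow encoding (the baseline that the subtour-elimination formulation is reported to improve upon) can be written with PuLP; the instance data are invented:

```python
import pulp

def budget_connected_subgraph(nodes, edges, cost, profit, budget, root):
    """Maximize selected-node profit under a cost budget, with connectivity
    enforced by a single-commodity flow from the root: every selected
    non-root node absorbs one unit, and flow may only traverse selected nodes."""
    n = len(nodes)
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    prob = pulp.LpProblem("ConnectedSubgraph", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", nodes, cat="Binary")
    f = pulp.LpVariable.dicts("f", arcs, lowBound=0)
    prob += pulp.lpSum(profit[v] * x[v] for v in nodes)
    prob += pulp.lpSum(cost[v] * x[v] for v in nodes) <= budget
    prob += x[root] == 1
    for v in nodes:
        if v == root:
            continue
        inflow = pulp.lpSum(f[a] for a in arcs if a[1] == v)
        outflow = pulp.lpSum(f[a] for a in arcs if a[0] == v)
        prob += inflow - outflow == x[v]      # each selected node absorbs 1
    for (u, v) in arcs:
        prob += f[(u, v)] <= (n - 1) * x[u]   # flow only between
        prob += f[(u, v)] <= (n - 1) * x[v]   # selected nodes
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v for v in nodes if x[v].value() > 0.5]

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(budget_connected_subgraph(nodes, edges, {0: 1, 1: 2, 2: 2, 3: 1},
                                {0: 0, 1: 5, 2: 4, 3: 1}, budget=5, root=0))
```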
Zheng, Panpan; Liu, Jinquan; Li, Zhu; Liu, Huafeng
2017-01-01
Encoder-like micro area-changed capacitive transducers are advantageous in terms of their better linearity and larger dynamic range compared to gap-changed capacitive transducers. Such transducers have been widely applied in rectilinear and rotational position sensors, lab-on-a-chip applications, and biosensors. However, a complete model accounting for both the parasitic capacitance and the fringe effect in area-changed capacitive transducers has not yet been developed. This paper presents a complete model for this type of transducer applied to a high-resolution micro accelerometer, verified by both simulations and experiments. A novel optimization method involving the insertion of photosensitive polyimide was used to reduce the parasitic capacitance, and the capacitor spacing was decreased to overcome the fringe effect. The sensitivity of the optimized transducer was approximately 46 pF/mm, nearly 40 times higher than that of our previous transducer. The displacement detection resolution was measured as 50 pm/√Hz at 0.1 Hz using a precise capacitance detection circuit. The transducer was then applied to a sandwich in-plane micro accelerometer, and the measured noise level of the accelerometer was approximately 30 ng/√Hz at 1 Hz. An earthquake that occurred in Taiwan was also detected during a continuous gravity measurement. PMID:28930176
Echo-level compensation and delay tuning in the auditory cortex of the mustached bat.
Macías, Silvio; Mora, Emanuel C; Hechavarría, Julio C; Kössl, Manfred
2016-06-01
During echolocation, bats continuously perform audio-motor adjustments to optimize detection efficiency. It has been demonstrated that bats adjust the amplitude of their biosonar vocalizations (known as 'pulses') to stabilize the amplitude of the returning echo. Here, we investigated this echo-level compensation behaviour by swinging mustached bats on a pendulum towards a reflective surface. In such a situation, the bats lower the amplitude of their emitted pulses to maintain the amplitude of incoming echoes at a constant level as they approach a target. We report that cortical auditory neurons that encode target distance have receptive fields that are optimized for dealing with echo-level compensation. In most cortical delay-tuned neurons, the echo amplitude eliciting the maximum response matches the echo amplitudes measured from the bats' biosonar vocalizations while they are swung in a pendulum. In addition, neurons tuned to short target distances are maximally responsive to low pulse amplitudes while neurons tuned to long target distances respond maximally to high pulse amplitudes. Our results suggest that bats dynamically adjust biosonar pulse amplitude to match the encoding of target range and to keep the amplitude of the returning echo within the bounds of the cortical map of echo delays. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Wang, Huan; Jing, Miao; Li, Yulong
2018-06-01
Measuring the precise dynamics of specific neurotransmitters and neuromodulators in the brain is essential for understanding how information is transmitted and processed. Thanks to the development and optimization of various genetically encoded sensors, we are approaching the stage in which a few key neurotransmitters/neuromodulators can be imaged with high cell specificity and good signal-to-noise ratio. Here, we summarize recent progress regarding these sensors, focusing on their design principles, properties, potential applications, and current limitations. We also highlight the G protein-coupled receptor (GPCR) scaffold as a promising platform that may enable the scalable development of the next generation of sensors, enabling the rapid, sensitive, and specific detection of a large repertoire of neurotransmitters/neuromodulators in vivo at cellular or even subcellular resolution. Copyright © 2018 Elsevier Ltd. All rights reserved.
Efficient quantum transmission in multiple-source networks.
Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-04-02
A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. Transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design, and on the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.
Designing and encoding models for synthetic biology.
Endler, Lukas; Rodriguez, Nicolas; Juty, Nick; Chelliah, Vijayalakshmi; Laibe, Camille; Li, Chen; Le Novère, Nicolas
2009-08-06
A key component of any synthetic biology effort is the use of quantitative models. These models and their corresponding simulations allow optimization of a system design, as well as guiding their subsequent analysis. Once a domain mostly reserved for experts, dynamical modelling of gene regulatory and reaction networks has been an area of growth over the last decade. There has been a concomitant increase in the number of software tools and standards, thereby facilitating model exchange and reuse. We give here an overview of the model creation and analysis processes as well as some software tools in common use. Using markup language to encode the model and associated annotation, we describe the mining of components, their integration in relational models, formularization and parametrization. Evaluation of simulation results and validation of the model close the systems biology 'loop'.
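For concreteness, the kind of quantitative model such workflows encode and simulate can be as small as a two-gene toggle switch (invented parameters; in practice the model would be written in a markup language such as SBML and annotated):

```python
import numpy as np
from scipy.integrate import solve_ivp

def toggle(t, y, alpha=10.0, n=2.0, delta=1.0):
    """Mutual-repression toggle switch: each gene's synthesis is a Hill
    function of the other's product, minus first-order degradation."""
    u, v = y
    return [alpha / (1.0 + v**n) - delta * u,
            alpha / (1.0 + u**n) - delta * v]

sol = solve_ivp(toggle, (0.0, 50.0), [1.0, 2.0])
print(sol.y[:, -1])   # the system settles into one of two stable states
```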
An opportunistic theory of cellular and systems consolidation
Mednick, Sara C.; Cai, Denise J.; Shuman, Tristan; Anagnostaras, Stephan; Wixted, John
2011-01-01
Memories are often classified as hippocampus-dependent or hippocampus-independent, and sleep has been found to facilitate both, but in different ways. In this Opinion article, we explore the optimal neural state for cellular and systems consolidation of hippocampus-dependent memories that benefit from sleep. We suggest that these two kinds of consolidation, which are ordinarily treated separately, may overlap in time and jointly benefit from a period of reduced interference (during which no new memories are formed). Conditions that result in reduced interference include slow wave sleep (SWS), NMDA receptor antagonists, benzodiazepines, alcohol, and acetylcholine antagonists. We hypothesize that the consolidation of hippocampus-dependent memories may not depend on SWS per se. Instead, the brain opportunistically consolidates previously encoded memories whenever the hippocampus is not otherwise occupied by the task of encoding new memories. PMID:21742389
Gao, Zhaowei; Li, Zhuofu; Zhang, Yuhong; Huang, Huoqing; Li, Mu; Zhou, Liwei; Tang, Yunming; Yao, Bin; Zhang, Wei
2012-03-01
The glucose oxidase (GOD) gene from Penicillium notatum was expressed in Pichia pastoris. The 1,815 bp gene, god-w, encodes 604 amino acids. Recombinant GOD-w had optimal activity at 35-40°C and pH 6.2, was stable from pH 3 to 7, and maintained >75% of maximum activity after incubation at 50°C for 1 h. GOD-w worked as well as commercial GODs to improve bread making. To achieve high-level expression of recombinant GOD in P. pastoris, 272 nucleotides involving 228 residues were mutated, consistent with the codon bias of P. pastoris. The optimized recombinant GOD-m yielded 615 U/ml (2.5 g protein/l) in a 3 l fermentor, 410% higher than GOD-w (148 U/ml), and is thus a low-cost alternative for the bread baking industry.
Foight, Glenna Wink; Chen, T. Scott; Richman, Daniel; Keating, Amy E.
2017-01-01
Peptide reagents with high affinity or specificity for their target protein interaction partner are of utility for many important applications. Optimization of peptide binding by screening large libraries is a proven and powerful approach. Libraries designed to be enriched in peptide sequences that are predicted to have desired affinity or specificity characteristics are more likely to yield success than random mutagenesis. We present a library optimization method in which the choice of amino acids to encode at each peptide position can be guided by available experimental data or structure-based predictions. We discuss how to use analysis of predicted library performance to inform rounds of library design. Finally, we include protocols for more complex library design procedures that consider the chemical diversity of the amino acids at each peptide position and optimize a library score based on a user-specified input model. PMID:28236241
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
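The Lagrangian design idea in this abstract (minimize distortion plus a rate penalty) can be sketched for the simplest one-stage scalar case. This is our illustrative reduction, not the paper's variable-rate RVQ algorithm; `K`, `lam`, and the Laplacian test source are arbitrary choices:

```python
import numpy as np

def ecsq(x, K=8, lam=0.1, iters=50):
    """Entropy-constrained scalar quantizer: minimize D + lam * R.
    Assign each sample to the codeword minimizing squared error plus a
    rate penalty -log2(p), then update centroids and probabilities."""
    rng = np.random.default_rng(0)
    c = rng.choice(x, K)               # initial codebook
    p = np.full(K, 1.0 / K)            # codeword probabilities
    for _ in range(iters):
        cost = (x[:, None] - c[None, :])**2 - lam * np.log2(p)[None, :]
        idx = cost.argmin(axis=1)
        for k in range(K):
            sel = idx == k
            if sel.any():
                c[k] = x[sel].mean()
                p[k] = sel.mean()
        p = np.maximum(p, 1e-12)
    return c, p, idx

x = np.random.default_rng(1).laplace(size=10000)
c, p, idx = ecsq(x)
D = np.mean((x - c[idx])**2)           # distortion
R = -np.mean(np.log2(p[idx]))          # entropy (rate) in bits/sample
print(f"distortion {D:.4f}, rate {R:.2f} bits/sample")
```

Sweeping `lam` traces out the operational rate-distortion curve, which is the role the Lagrange multiplier plays in the paper's entropy-constrained design.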
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces several challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, and core allocation. Simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
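As a rough illustration of the genetic-algorithm machinery the abstract describes (integer encoding, crossover, mutation), here is a toy sketch for virtual-node mapping only; the cost matrix, penalty, and operators are our assumptions and omit links, cores, and spectrum:

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical toy instance: map 4 virtual nodes onto 8 physical nodes.
N_VIRT, N_PHYS, POP, GENS = 4, 8, 40, 200
cost = rng.uniform(1, 10, size=(N_VIRT, N_PHYS))

def fitness(ind):
    # Penalize mapping two virtual nodes onto the same physical node.
    penalty = 100.0 * (N_VIRT - len(set(ind)))
    return cost[np.arange(N_VIRT), ind].sum() + penalty

pop = rng.integers(0, N_PHYS, size=(POP, N_VIRT))  # integer encoding
for _ in range(GENS):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[f.argsort()[:POP // 2]]           # truncation selection
    cut = rng.integers(1, N_VIRT, size=POP // 2)    # one-point crossover
    kids = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                     for i, c in enumerate(cut)])
    mut = rng.random(kids.shape) < 0.05             # per-gene mutation
    kids[mut] = rng.integers(0, N_PHYS, size=mut.sum())
    pop = np.vstack([parents, kids])
print("best mapping:", pop[np.argmin([fitness(i) for i in pop])])
```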
Relabeling exchange method (REM) for learning in neural networks
NASA Astrophysics Data System (ADS)
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels, which are usually assigned arbitrarily. In this paper it is shown that there is a significant difference in the rms learning error when 'optimal' label assignment schemes are used. We investigated two efficient random search algorithms to solve the relabeling problem, simulated annealing and the genetic algorithm, but found them to be computationally expensive. We therefore introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
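A greedy pairwise-exchange heuristic in the spirit of REM (our sketch, not the published algorithm) might look as follows; the matrix `cost[c, l]` is a hypothetical stand-in for the training error incurred when class c is encoded by output label l:

```python
import numpy as np

def relabel_exchange(cost):
    """Start from the identity assignment of classes to output labels
    and swap label pairs as long as total cost keeps decreasing."""
    n = cost.shape[0]
    assign = list(range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                old = cost[i, assign[i]] + cost[j, assign[j]]
                new = cost[i, assign[j]] + cost[j, assign[i]]
                if new < old:                     # exchange lowers cost
                    assign[i], assign[j] = assign[j], assign[i]
                    improved = True
    return assign

rng = np.random.default_rng(0)
cost = rng.uniform(size=(5, 5))                   # toy class-label costs
print("label assignment:", relabel_exchange(cost))
```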
Bacterial synthesis of N-hydroxycinnamoyl phenethylamines and tyramines.
Sim, Geun Young; Yang, So-Mi; Kim, Bong Gyu; Ahn, Joong-Hoon
2015-10-13
Hydroxycinnamic acids (HCAs), including cinnamic acid, p-coumaric acid, caffeic acid, and ferulic acid, are C6-C3 phenolic compounds that are synthesized via the phenylpropanoid pathway. HCAs serve as precursors for the synthesis of lignins, flavonoids, anthocyanins, stilbenes and other phenolic compounds. HCAs can also be conjugated with diverse compounds including quinic acid, hydroxy acids, and amines. Hydroxycinnamoyl (HC) amine conjugates such as N-HC tyramines and N-HC phenethylamines have been considered as potential starting materials for developing antiviral and anticancer drugs. We synthesized N-HC tyramines and N-HC phenethylamines using three different approaches in Escherichia coli. Five N-HC phenethylamines and eight N-HC tyramines were synthesized by feeding HCAs and phenethylamine or tyramine to E. coli harboring 4CL (encoding 4-coumarate:CoA ligase) and either SHT (encoding phenethylamine N-HC transferase) or THT (encoding tyramine N-HC transferase). Also, N-(p-coumaroyl) phenethylamine and N-(p-coumaroyl) tyramine were synthesized from p-coumaric acid using E. coli harboring an additional gene, PDC (encoding phenylalanine decarboxylase) or TDC (encoding tyrosine decarboxylase). Finally, we synthesized N-(p-coumaroyl) phenethylamine and N-(p-coumaroyl) tyramine from glucose by reconstructing the metabolic pathways for their synthesis in E. coli. Productivity was maximized by optimizing the cell concentration and incubation temperature. We reconstructed the metabolic pathways for synthesis of N-HC tyramines and N-HC phenethylamines by expressing several genes, including 4CL, THT or SHT, PDC or TDC, and TAL (encoding tyrosine ammonia lyase), and engineering the shikimate metabolic pathway to increase the endogenous tyrosine concentration in E. coli. Approximately 101.9 mg/L N-(p-coumaroyl) phenethylamine and 495.4 mg/L N-(p-coumaroyl) tyramine were synthesized from p-coumaric acid. Furthermore, 152.5 mg/L N-(p-coumaroyl) phenethylamine and 94.7 mg/L N-(p-coumaroyl) tyramine were synthesized from glucose.
Efficient encoding of motion is mediated by gap junctions in the fly visual system.
Wang, Siwei; Borst, Alexander; Zaslavsky, Noga; Tishby, Naftali; Segev, Idan
2017-12-01
Understanding the computational implications of specific synaptic connectivity patterns is a fundamental goal in neuroscience. In particular, the computational role of ubiquitous electrical synapses operating via gap junctions remains elusive. In the fly visual system, the cells in the vertical system (VS) network, which play a key role in visual processing, primarily connect to each other via axonal gap junctions. This network therefore provides a unique opportunity to explore the functional role of gap junctions in sensory information processing. Our information-theoretic analysis of a realistic VS network model shows that within 10 ms following the onset of the visual input, the presence of axonal gap junctions enables the VS system to efficiently encode the axis of rotation, θ, of the fly's ego motion. This encoding efficiency, measured in bits, is near-optimal with respect to the physical limits of performance determined by the statistical structure of the visual input itself. The VS network is known to be connected to downstream pathways via a subset of triplets of the VS cells; we found that, because of the axonal gap junctions, the efficiency of this subpopulation in encoding θ is superior to that of the whole VS network and is robust to a wide range of signal-to-noise ratios. We further demonstrate that this efficient encoding of motion by this subpopulation is necessary for the fly's visually guided behavior, such as banked turns in evasive maneuvers. Because gap junctions are formed among the axons of the VS cells, they only impact the system's readout, while leaving the dendritic input intact, suggesting that the computational principles implemented by neural circuitries may be much richer than previously appreciated on the basis of point-neuron models. Our study provides new insights into how specific network connectivity leads to efficient encoding of sensory stimuli.
Lempereur, Laetitia; Larcombe, Stephen D; Durrani, Zeeshan; Karagenc, Tulin; Bilgic, Huseyin Bilgin; Bakirci, Serkan; Hacilarlioglu, Selin; Kinnaird, Jane; Thompson, Joanne; Weir, William; Shiels, Brian
2017-06-05
Vector-borne apicomplexan parasites are a major cause of mortality and morbidity to humans and livestock globally. The most important disease syndromes caused by these parasites are malaria, babesiosis and theileriosis. Strategies for control often target parasite stages in the mammalian host that cause disease, but this can result in reservoir infections that promote pathogen transmission and generate economic loss. Optimal control strategies should protect against clinical disease, block transmission and be applicable across related genera of parasites. We have used bioinformatics and transcriptomics to screen for transmission-blocking candidate antigens in the tick-borne apicomplexan parasite, Theileria annulata. A number of candidate antigen genes were identified which encoded amino acid domains that are conserved across vector-borne Apicomplexa (Babesia, Plasmodium and Theileria), including the Pfs48/45 6-cys domain and a novel cysteine-rich domain. Expression profiling confirmed that selected candidate genes are expressed by life cycle stages within infected ticks. Additionally, putative B cell epitopes were identified in the T. annulata gene sequences encoding the 6-cys and cysteine rich domains, in a gene encoding a putative papain-family cysteine peptidase, with similarity to the Plasmodium SERA family, and the gene encoding the T. annulata major merozoite/piroplasm surface antigen, Tams1. Candidate genes were identified that encode proteins with similarity to known transmission blocking candidates in related parasites, while one is a novel candidate conserved across vector-borne apicomplexans and has a potential role in the sexual phase of the life cycle. The results indicate that a 'One Health' approach could be utilised to develop a transmission-blocking strategy effective against vector-borne apicomplexan parasites of animals and humans.
Breaking the news on mobile TV: user requirements of a popular mobile content
NASA Astrophysics Data System (ADS)
Knoche, Hendrik O.; Sasse, M. Angela
2006-02-01
This paper presents the results from three lab-based studies that investigated different ways of delivering mobile TV news by measuring user responses to different encoding bitrates, image resolutions, and text quality. All studies were carried out with participants watching news content on mobile devices, with a total of 216 participants rating the acceptability of the viewing experience. Study 1 compared the acceptability of a 15-second video clip at different video and audio encoding bit rates on a 3G phone at a resolution of 176x144 and on an iPAQ PDA (240x180). Study 2 measured the acceptability of the video quality of full feature news clips of 2.5 minutes which were recorded from broadcast TV, encoded at resolutions ranging from 120x90 to 240x180, and combined with different encoding bit rates and audio qualities presented on an iPAQ. Study 3 improved the legibility of the text included in the video, simulating separate text delivery. The acceptability of news video quality was greatly reduced at a resolution of 120x90. The legibility of text was a decisive factor in the participants' assessment of video quality. Resolutions of 168x126 and higher were substantially more acceptable when they were accompanied by optimized high-quality text compared to proportionally scaled inline text. When accompanied by high-quality text, TV news clips were acceptable to the vast majority of participants at resolutions as small as 168x126 at video encoding bitrates of 160 kbps and higher. Service designers and operators can apply this knowledge to design a cost-effective mobile TV experience.
Pereira, Antonina; Altgassen, Mareike; Atchison, Lesley; de Mendonça, Alexandre; Ellis, Judi
2018-04-16
Prospective memory (PM), the ability to remember to perform future activities, is a fundamental requirement for independent living. PM tasks pervade our daily lives, and PM failures represent one of the most prominent memory concerns across the entire life span. This study aimed to address this issue by exploring the potential benefits of specific encoding strategies on memory for intentions across healthy adulthood and in the early stages of cognitive impairment. PM performance was explored through an experimental paradigm in 96 participants: 32 patients with amnestic mild cognitive impairment aged 64-87 years (M = 6.75, SD = 5.88), 32 healthy older adults aged 62-84 years (M = 76.06, SD = 6.03), and 32 younger adults aged 18-22 years (M = 19.75, SD = 1.16). The potential benefit of using enactment (i.e., physically simulating the intended action) at encoding to support autonomous performance despite neuronal degeneration was assessed. PM was consistently identified as a sensitive and specific indicator of cognitive impairment. Importantly, enacted encoding was consistently beneficial for the PM performance of all participants, but especially so in the case of healthy and cognitively impaired older adults. These positive results unveil the potential of this encoding technique to optimize attentional demands through an adaptive allocation of strategic resources across both healthy and cognitively impaired samples. Theoretical implications of this work are discussed, as well as its considerable translational potential to improve social well-being. A better understanding of the strategies that can enhance PM offers the potential for cost-effective and widely applicable tools which may support independent living across the adult life span. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A MPEG-4 encoder based on TMS320C6416
NASA Astrophysics Data System (ADS)
Li, Gui-ju; Liu, Wei-ning
2013-08-01
Engineering and products need to achieve real-time video encoding on DSPs, but the high computational complexity and huge data volume require a system with high data throughput. In this paper, a real-time MPEG-4 video encoder is designed based on the TMS320C6416 platform. The kernel is the TMS320C6416T DSP, with an FPGA chip handling the organization and management of the video data and controlling the flow of input and output data. The encoded stream is output over a synchronous serial port. The system runs at a clock frequency of 1 GHz and provides up to 8000 MIPS of processing capacity at full speed. Because an MPEG-4 video encoder ported directly to the DSP platform codes inefficiently, the program structure, data structures, and algorithms must be improved to exploit the characteristics of the TMS320C6416T. First, the image storage architecture is designed by balancing computation cost, storage cost, and EDMA read time. Several buffers are opened in memory, each caching 16 lines of the video data to be encoded, the reconstructed image, and the reference image including the search range. Memory is saved by using the DSP's variable alignment mode, modifying the definition of structure variables, and replacing the large look-up table with a directly calculated array. After this restructuring of the program, the program code, all variables, the buffers, and the interpolated image including the search range can all be placed in internal memory. Next, the time-consuming processing modules and frequently called functions are rewritten in TMS320C6416T parallel assembly language to increase running speed. In addition, the motion estimation algorithm is improved by using a cross-hexagon search, which markedly increases search speed. Finally, the execution time, signal-to-noise ratio, and compression ratio for a real-time image acquisition sequence are given. The experimental results show that the designed encoder can encode 768×576, 25 frame/s grayscale video in real time at a bit rate of 1.5 Mbit/s.
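The cross-hexagon search mentioned above belongs to the family of hexagon-pattern motion searches; a generic hexagon-search variant (our sketch, not the paper's exact algorithm) can be outlined as:

```python
import numpy as np

HEX = [(0, 0), (2, 0), (-2, 0), (1, 2), (1, -2), (-1, 2), (-1, -2)]
SMALL = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(cur, ref, x, y, dx, dy, B=16):
    """Sum of absolute differences for a candidate displacement (dx, dy)."""
    if x + dx < 0 or y + dy < 0:
        return np.inf                   # candidate outside the frame
    patch = ref[y + dy:y + dy + B, x + dx:x + dx + B]
    if patch.shape != (B, B):
        return np.inf
    return np.abs(cur.astype(int) - patch.astype(int)).sum()

def hexagon_search(cur_blk, ref, x, y):
    """Walk the large hexagon until its center is best, then refine
    with the small diamond pattern."""
    mv = (0, 0)
    while True:
        cands = [(mv[0] + dx, mv[1] + dy) for dx, dy in HEX]
        best = min(cands, key=lambda c: sad(cur_blk, ref, x, y, *c))
        if best == mv:
            break
        mv = best
    cands = [(mv[0] + dx, mv[1] + dy) for dx, dy in SMALL]
    return min(cands, key=lambda c: sad(cur_blk, ref, x, y, *c))

ref = np.zeros((64, 64), dtype=np.uint8)
cur = ref[8:24, 8:24].copy()
print(hexagon_search(cur, ref, 8, 8))   # (0, 0) for a static scene
```

The pattern visits far fewer candidates than an exhaustive window scan, which is why such searches speed up motion estimation on a DSP.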
Dong, G; Vieille, C; Zeikus, J G
1997-01-01
The gene encoding the Pyrococcus furiosus hyperthermophilic amylopullulanase (APU) was cloned, sequenced, and expressed in Escherichia coli. The gene encoded a single 827-residue polypeptide with a 26-residue signal peptide. The protein sequence had very low homology (17 to 21% identity) with other APUs and enzymes of the alpha-amylase family. In particular, none of the consensus regions present in the alpha-amylase family could be identified. P. furiosus APU showed similarity to three proteins, including the P. furiosus intracellular alpha-amylase and Dictyoglomus thermophilum alpha-amylase A. The mature protein had a molecular weight of 89,000. The recombinant P. furiosus APU remained folded after denaturation at temperatures of ≤70°C and showed an apparent molecular weight of 50,000 in sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Denaturation temperatures above 100°C were required for complete unfolding. The enzyme was extremely thermostable, with optimal activity at 105°C and pH 5.5. Ca2+ increased the enzyme activity, thermostability, and substrate affinity. The enzyme was highly resistant to chemical denaturing reagents, and its activity increased up to twofold in the presence of surfactants. PMID:9293009
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
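For intuition only, the retrieval side of binary hashing (binarize, then rank by Hamming distance) can be mimicked with a random-projection stand-in; this is far simpler than SSVH's learned hierarchical auto-encoder, and all dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 4096))          # 4096-d video feature -> 64 bits

def hash_video(feat):
    return (W @ feat > 0).astype(np.uint8)   # sign binarization

def hamming(a, b):
    return int((a != b).sum())

db = [rng.standard_normal(4096) for _ in range(100)]     # fake video features
codes = [hash_video(f) for f in db]
query = hash_video(db[42] + 0.1 * rng.standard_normal(4096))  # noisy copy
best = min(range(100), key=lambda i: hamming(codes[i], query))
print(best)  # likely 42: the near-duplicate is retrieved via Hamming distance
```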
Chi, Zhenming; Chi, Zhe; Zhang, Tong; Liu, Guanglei; Li, Jing; Wang, Xianghong
2009-01-01
In this review article, extracellular enzyme production, the properties of the enzymes, and the cloning of the genes encoding them from marine yeasts are reviewed. Several yeast strains which could produce different kinds of extracellular enzymes were selected from the culture collection of marine yeasts available in this laboratory. The strains selected belong to different genera such as Yarrowia, Aureobasidium, Pichia, Metschnikowia and Cryptococcus. The extracellular enzymes include cellulase, alkaline protease, aspartic protease, amylase, inulinase, lipase and phytase, as well as killer toxin. The conditions and media for enzyme production by the marine yeasts have been optimized, and the enzymes have been purified and characterized. Some genes encoding the extracellular enzymes from the marine yeast strains have been cloned, sequenced and expressed. It was found that some properties of the enzymes from the marine yeasts are unique compared to those of the homologous enzymes from terrestrial yeasts, and that the genes encoding the enzymes in marine yeasts differ from those in terrestrial yeasts. Therefore, it is of great importance to further study the enzymes and their genes from marine yeasts. This is the first review on the extracellular enzymes and their genes from marine yeasts.
Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W
2016-11-14
Two orthogonal-modulation optical label switching (OLS) schemes are investigated for high-bit-rate OLS networks, in which a polarization-multiplexed differential quadrature phase-shift keying (POLMUX-DQPSK, or PDQ) payload is modulated with either a duobinary (DB) label or a pulse-position modulation (PPM) label. The BER performance of hybrid modulation with payload and label signals is discussed and evaluated in theory and by simulation. Theoretical BER expressions for PDQ, PDQ-DB, and PDQ-PPM are given using an analysis method for hybrid-modulation encoding at different payload-to-label bit-rate ratios. The theoretical derivations show that a hybrid-modulated payload has a receiver-sensitivity gain over a payload without a label, and that the size of this BER gain depends on the type of label. The simulation results are consistent with the theoretical conclusions. The extinction-ratio (ER) conflict between intensity-type and phase-type hybrid encoding can be balanced and optimized in a hybrid-modulation OLS system. The BER analysis method for hybrid-modulation encoding in OLS systems can be applied to other n-ary hybrid or combined modulation systems.
Dekhtiarenko, Iryna; Ratts, Robert B; Blatnik, Renata; Lee, Lian N; Fischer, Sonja; Borkner, Lisa; Oduro, Jennifer D; Marandu, Thomas F; Hoppe, Stephanie; Ruzsics, Zsolt; Sonnemann, Julia K; Mansouri, Mandana; Meyer, Christine; Lemmermann, Niels A W; Holtappels, Rafaela; Arens, Ramon; Klenerman, Paul; Früh, Klaus; Reddehase, Matthias J; Riemer, Angelika B; Cicin-Sain, Luka
2016-12-01
Cytomegalovirus (CMV) elicits long-term T-cell immunity of unparalleled strength, which has allowed the development of highly protective CMV-based vaccine vectors. Counterintuitively, experimental vaccines encoding a single MHC-I restricted epitope offered better immune protection than those expressing entire proteins, including the same epitope. To clarify this conundrum, we generated recombinant murine CMVs (MCMVs) encoding well-characterized MHC-I epitopes at different positions within viral genes and observed strong immune responses and protection against viruses and tumor growth when the epitopes were expressed at the protein C-terminus. We used the M45-encoded conventional epitope HGIRNASFI to dissect this phenomenon at the molecular level. A recombinant MCMV expressing HGIRNASFI on the C-terminus of M45, in contrast to wild-type MCMV, enabled peptide processing by the constitutive proteasome, direct antigen presentation, and an inflation of antigen-specific effector memory cells. Consequently, our results indicate that constitutive proteasome processing of antigenic epitopes in latently infected cells is required for robust inflationary responses. This insight allows utilizing the epitope positioning in the design of CMV-based vectors as a novel strategy for enhancing their efficacy.
Smit, Bart A.; van Hylckama Vlieg, Johan E. T.; Engels, Wim J. M.; Meijer, Laura; Wouters, Jan T. M.; Smit, Gerrit
2005-01-01
The biochemical pathway for formation of branched-chain aldehydes, which are important flavor compounds derived from proteins in fermented dairy products, consists of a protease, peptidases, a transaminase, and a branched-chain α-keto acid decarboxylase (KdcA). The activity of the latter enzyme has been found only in a limited number of Lactococcus lactis strains. By using a random mutagenesis approach, the gene encoding KdcA in L. lactis B1157 was identified. The gene for this enzyme is highly homologous to the gene annotated ipd, which encodes a putative indole pyruvate decarboxylase, in L. lactis IL1403. Strain IL1403 does not produce KdcA, which could be explained by a 270-nucleotide deletion at the 3′ terminus of the ipd gene encoding a truncated nonfunctional decarboxylase. The kdcA gene was overexpressed in L. lactis for further characterization of the decarboxylase enzyme. Of all of the potential substrates tested, the highest activity was observed with branched-chain α-keto acids. Moreover, the enzyme activity was hardly affected by high salinity, and optimal activity was found at pH 6.3, indicating that the enzyme might be active under cheese ripening conditions. PMID:15640202
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
NASA Astrophysics Data System (ADS)
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.
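The efficiency claim follows directly from the stated scaling: with computation proportional to the area of the assigned square region (side L) and inter-processor communication proportional to its linear dimension, the parallel efficiency is

\[
E \;=\; \frac{T_{\mathrm{comp}}}{T_{\mathrm{comp}} + T_{\mathrm{comm}}}
\;=\; \frac{1}{1 + \kappa / L},
\]

where κ is a machine-dependent communication-to-computation cost ratio (our notation, not the thesis's). E approaches 100% as the region assigned to each processor grows, matching the abstract's observation.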
HERMES: Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy
Chan, Kimberly L.; Puts, Nicolaas A. J.; Schär, Michael; Barker, Peter B.; Edden, Richard A. E.
2017-01-01
Purpose: To investigate a novel Hadamard-encoded spectral editing scheme and evaluate its performance in simultaneously quantifying N-acetyl aspartate (NAA) and N-acetyl aspartyl glutamate (NAAG) at 3 Tesla. Methods: Editing pulses applied according to a Hadamard encoding scheme allow the simultaneous acquisition of multiple metabolites. The method, called HERMES (Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy), was optimized to detect NAA and NAAG simultaneously using density-matrix simulations and validated in phantoms at 3T. In vivo data were acquired in the centrum semiovale of 12 normal subjects. The NAA:NAAG concentration ratio was determined by modeling in vivo data using simulated basis functions. Simulations were also performed for potentially coedited molecules with signals within the detected NAA/NAAG region. Results: Simulations and phantom experiments show excellent segregation of NAA and NAAG signals into the intended spectra, with minimal crosstalk. Multiplet patterns show good agreement between simulations and phantom and in vivo data. In vivo measurements show that the relative peak intensities of the NAA and NAAG spectra are consistent with a NAA:NAAG concentration ratio of 4.22:1, in good agreement with the literature. Simulations indicate some coediting of aspartate and glutathione near the detected region (editing efficiency: 4.5% and 78.2%, respectively, for the NAAG reconstruction and 5.1% and 19.5%, respectively, for the NAA reconstruction). Conclusion: The simultaneous and separable detection of two otherwise overlapping metabolites using HERMES is possible at 3T. PMID:27089868
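The Hadamard reconstruction logic can be illustrated with a schematic numpy sketch (ours, not a pulse-sequence simulation): four sub-acquisitions invert metabolites A and B according to Hadamard signs, and signed averages separate the two edited spectra while cancelling unedited signal. The peak shapes and frequencies are arbitrary placeholders:

```python
import numpy as np

n = 512
freq = np.linspace(0, 1, n)
peak = lambda f0: np.exp(-((freq - f0) / 0.01)**2)
A, B, base = peak(0.3), peak(0.7), peak(0.5)   # A, B editable; base is not

# Editing ON (-1) inverts the target peak; OFF (+1) leaves it alone.
on = {'A': (-1, +1, -1, +1), 'B': (-1, -1, +1, +1)}   # Hadamard scheme
subspec = [on['A'][k] * A + on['B'][k] * B + base for k in range(4)]

# Signed averages: squared signs keep the target, orthogonality cancels
# the other metabolite, and zero-sum signs cancel the unedited baseline.
edited_A = sum(on['A'][k] * subspec[k] for k in range(4)) / 4
edited_B = sum(on['B'][k] * subspec[k] for k in range(4)) / 4
print(np.allclose(edited_A, A), np.allclose(edited_B, B))  # True True
```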
Development and implementation of an 84-channel matrix gradient coil.
Littin, Sebastian; Jia, Feng; Layton, Kelvin J; Kroboth, Stefan; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim
2018-02-01
Design, implement, integrate, and characterize a customized coil system that allows for generating spatial encoding magnetic fields (SEMs) in a highly flexible fashion. A gradient coil with a high number of individual elements was designed. Dimensions of the coil were chosen to mimic a whole-body gradient system, scaled down to a head insert. The mechanical shape and wire layout of each element were optimized to increase the local gradient strength while minimizing eddy current effects and simultaneously considering manufacturing constraints. The resulting wire layout and mechanical design are presented. A prototype matrix gradient coil with 12 × 7 = 84 elements consisting of two element types was realized and characterized. Measured eddy currents are <1% of the original field. The coil is shown to be capable of creating nonlinear and linear SEMs. In a DSV of 0.22 m, gradient strengths between 24 mT/m and 78 mT/m could be realized locally with maximum currents of 150 A. Initial proof-of-concept imaging experiments using linear and nonlinear encoding fields are demonstrated. A shielded matrix gradient coil setup capable of generating encoding fields in a highly flexible manner was designed and implemented. The presented setup is expected to serve as a basis for validating novel imaging techniques that rely on nonlinear spatial encoding fields. Magn Reson Med 79:1181-1191, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Florek, Nicholas W; Weinfurter, Jason T; Jegaskanda, Sinthujan; Brewoo, Joseph N; Powell, Tim D; Young, Ginger R; Das, Subash C; Hatta, Masato; Broman, Karl W; Hungnes, Olav; Dudman, Susanne G; Kawaoka, Yoshihiro; Kent, Stephen J; Stinchcomb, Dan T; Osorio, Jorge E; Friedrich, Thomas C
2014-11-01
Current influenza virus vaccines primarily aim to induce neutralizing antibodies (NAbs). Modified vaccinia virus Ankara (MVA) is a safe and well-characterized vector for inducing both antibody and cellular immunity. We evaluated the immunogenicity and protective efficacy of MVA encoding influenza virus hemagglutinin (HA) and/or nucleoprotein (NP) in cynomolgus macaques. Animals were given 2 doses of MVA-based vaccines 4 weeks apart and were challenged with a 2009 pandemic H1N1 isolate (H1N1pdm) 8 weeks after the last vaccination. MVA-based vaccines encoding HA induced potent serum antibody responses against homologous H1 or H5 HAs but did not stimulate strong T cell responses prior to challenge. However, animals that received MVA encoding influenza virus HA and/or NP had high frequencies of virus-specific CD4(+) and CD8(+) T cell responses within the first 7 days of H1N1pdm infection, while animals vaccinated with MVA encoding irrelevant antigens did not. We detected little or no H1N1pdm replication in animals that received vaccines encoding H1 (homologous) HA, while a vaccine encoding NP from an H5N1 isolate afforded no protection. Surprisingly, H1N1pdm viral shedding was reduced in animals vaccinated with MVA encoding HA and NP from an H5N1 isolate. This reduced shedding was associated with cross-reactive antibodies capable of mediating antibody-dependent cellular cytotoxicity (ADCC) effector functions. Our results suggest that ADCC plays a role in cross-protective immunity against influenza. Vaccines optimized to stimulate cross-reactive antibodies with ADCC function may provide an important measure of protection against emerging influenza viruses when NAbs are ineffective. Current influenza vaccines are designed to elicit neutralizing antibodies (NAbs). Vaccine-induced NAbs typically are effective but highly specific for particular virus strains. Consequently, current vaccines are poorly suited for preventing the spread of newly emerging pandemic viruses. Therefore, we evaluated a vaccine strategy designed to induce both antibody and T cell responses, which may provide more broadly cross-protective immunity against influenza. Here, we show in a translational primate model that vaccination with a modified vaccinia virus Ankara encoding hemagglutinin from a heterosubtypic H5N1 virus was associated with reduced shedding of a pandemic H1N1 virus challenge, while vaccination with MVA encoding nucleoprotein, an internal viral protein, was not. Unexpectedly, this reduced shedding was associated with nonneutralizing antibodies that bound H1 hemagglutinin and activated natural killer cells. Therefore, antibody-dependent cellular cytotoxicity (ADCC) may play a role in cross-protective immunity to influenza virus. Vaccines that stimulate ADCC antibodies may enhance protection against pandemic influenza virus. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Computational Tools and Algorithms for Designing Customized Synthetic Genes
Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris
2014-01-01
Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
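A minimal instance of the "singular objective" codon-usage optimization such tools perform is to recode each amino acid with the host's most frequent codon. The usage table below is an invented toy, not a real organism's:

```python
# Toy codon-usage table: amino acid -> {codon: frequency}. Values are
# illustrative placeholders, not measured frequencies for any host.
USAGE = {
    'M': {'ATG': 1.00},
    'K': {'AAA': 0.74, 'AAG': 0.26},
    'F': {'TTT': 0.45, 'TTC': 0.55},
    '*': {'TAA': 0.62, 'TGA': 0.30, 'TAG': 0.08},
}

def optimize(protein):
    """Recode a protein with the most frequent codon per amino acid."""
    return ''.join(max(USAGE[aa], key=USAGE[aa].get) for aa in protein)

print(optimize('MKF*'))  # ATGAAATTCTAA
```

Real tools trade this single objective off against others (restriction sites, GC content, secondary structure), which is exactly the multi-objective search the review discusses.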
NASA Astrophysics Data System (ADS)
Wang, He; Zhang, Wen-Hao; Wong, K. Y. Michael; Wu, Si
Extensive studies suggest that the brain integrates multisensory signals in a Bayesian optimal way. However, it remains largely unknown how sensory reliability and prior information shape the neural architecture. In this work, we propose a biologically plausible neural field model which can perform optimal multisensory integration and encode the whole profile of the posterior. Our model is composed of two modules, each for one modality. Crosstalk between the two modules can be carried out through feedforward cross-links and reciprocal connections. We found that the reciprocal couplings are crucial to optimal multisensory integration, in that the reciprocal coupling pattern is shaped by the correlation in the joint prior distribution of the sensory stimuli. A perturbative approach is developed to illustrate quantitatively the relation between the prior information and features of the coupling patterns. Our results show that a decentralized architecture based on reciprocal connections is able to accommodate complex correlation structures across modalities and utilize this prior information in optimal multisensory integration. This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12 and 605813), the National Basic Research Program of China (2014CB846101), and the Natural Science Foundation of China (31261160495).
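For two independent Gaussian cues, the "Bayesian optimal" integration the abstract refers to reduces to textbook inverse-variance weighting; a short sketch with made-up numbers:

```python
# Optimal fusion of two independent Gaussian cues: the posterior mean
# weights each cue by its inverse variance (reliability), and the
# posterior variance is smaller than either cue's alone.
def integrate(mu1, var1, mu2, var2):
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    mu = w1 * mu1 + (1 - w1) * mu2
    var = 1 / (1 / var1 + 1 / var2)
    return mu, var

mu, var = integrate(10.0, 4.0, 14.0, 1.0)  # e.g. visual vs. vestibular cue
print(mu, var)  # ~13.2 and 0.8: pulled toward the more reliable cue
```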
Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria
Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M
2014-01-01
Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589
Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
Kiumarsi, Bahare; Lewis, Frank L
2015-01-01
This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
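For context, a common way in the constrained-input literature to encode a bound |u| ≤ λ into a nonquadratic performance function is the following form (a standard choice in that literature; the paper's exact function may differ):

\[
W(u) \;=\; 2\int_{0}^{u} \lambda \,\tanh^{-1}\!\left(\frac{v}{\lambda}\right) R \,\mathrm{d}v,
\qquad
J \;=\; \sum_{k=0}^{\infty} \gamma^{k}\left( e_{k}^{\top} Q\, e_{k} + W(u_{k}) \right),
\]

where e_k is the augmented tracking error and γ the discount factor. Minimizing J yields controls of the form u = λ tanh(·), so the input constraint is satisfied automatically rather than being imposed after the fact.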
Bioluminescence Monitoring of Neuronal Activity in Freely Moving Zebrafish Larvae
Knafo, Steven; Prendergast, Andrew; Thouvenin, Olivier; Figueiredo, Sophie Nunes; Wyart, Claire
2017-01-01
The proof of concept for bioluminescence monitoring of neural activity in zebrafish with the genetically encoded calcium indicator GFP-aequorin has been previously described (Naumann et al., 2010), but challenges remain. First, bioluminescence signals originating from a single muscle fiber can constitute a major pitfall. Second, bioluminescence signals emanating from neurons only are very small. To improve signals while verifying specificity, we provide an optimized four-step protocol achieving: 1) selective expression of a zebrafish codon-optimized GFP-aequorin; 2) efficient soaking of larvae in the GFP-aequorin substrate coelenterazine; 3) bioluminescence monitoring of neural activity from motor neurons in free-tailed moving animals performing acoustic escapes; and 4) verification of the absence of muscle expression using immunohistochemistry. PMID:29130058
Noussa-Yao, Joseph; Heudes, Didier; Escudie, Jean-Baptiste; Degoulet, Patrice
2016-01-01
Short-stay MSO (Medicine, Surgery, Obstetrics) hospitalization activities in public hospitals and in private hospitals providing public services are funded through charges for the services provided (T2A in French). Coding must be well matched to the severity of the patient's condition to ensure that appropriate funding is provided to the hospital. We propose the use of an autocompletion process and a multidimensional matrix to help physicians improve the expression of information and optimize clinical coding. With this approach, physicians without knowledge of the encoding rules begin from a rough concept, which is gradually refined through semantic proximity, using information on associated codes drawn from optimized knowledge bases of diagnosis codes.
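The autocompletion step can be pictured with a toy substring matcher over a diagnosis-code knowledge base; the terms and codes below are invented placeholders, not real classification entries:

```python
# Toy autocompletion over a diagnosis-code knowledge base. A physician's
# rough fragment is matched against term labels; real systems would add
# semantic-proximity ranking rather than plain substring matching.
CODES = {
    "acute myocardial infarction": "X10.0",   # made-up code
    "acute myeloid leukemia": "X92.0",        # made-up code
    "chronic kidney disease": "X18.9",        # made-up code
}

def complete(fragment, k=5):
    frag = fragment.lower()
    hits = [(term, code) for term, code in CODES.items() if frag in term]
    return sorted(hits)[:k]

print(complete("acute my"))  # both "acute my..." entries are suggested
```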
Real-time video compressing under DSP/BIOS
NASA Astrophysics Data System (ADS)
Chen, Qiu-ping; Li, Gui-ju
2009-10-01
This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The programming framework for video compression is built from a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, and interrupts, realizing real-time video compression. To handle data transfer within the system, and based on the architecture of the C64x DSP, double-buffer switching and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing proceed at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling. The whole system achieves high-speed transfer of large amounts of data. Experimental results show the encoder can achieve real-time encoding of 768x576, 25 frame/s video images.
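The double-buffer (ping-pong) scheme the abstract describes can be sketched with stdlib threads standing in for the EDMA engine; buffer sizes and the encode stub are our assumptions:

```python
import threading, queue

EMPTY, FULL = queue.Queue(), queue.Queue()
for _ in range(2):                      # two buffers: ping and pong
    EMPTY.put(bytearray(16 * 720))      # each holds 16 lines of video

def process(buf):
    pass                                # stand-in for the MPEG-4 encode step

def dma_reader(frames):
    """'DMA' side: fill a free buffer while the encoder works on its twin."""
    for chunk in frames:
        buf = EMPTY.get()
        buf[:len(chunk)] = chunk
        FULL.put(buf)
    FULL.put(None)                      # end-of-stream marker

def encoder():
    while (buf := FULL.get()) is not None:
        process(buf)                    # encode while the twin refills
        EMPTY.put(buf)                  # recycle the buffer

frames = [bytes(16 * 720)] * 8
t = threading.Thread(target=dma_reader, args=(frames,)); t.start()
encoder(); t.join()
```

Because filling and encoding overlap, throughput is limited by the slower of the two stages rather than their sum, which is the point of the EDMA-based design.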
Optimized distortion correction technique for echo planar imaging.
Chen, N K; Wyrwicz, A M
2001-03-01
A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.
Zhang, Jianfeng; Jex, Edward; Feng, Tsungwei; Sivko, Gloria S; Baillie, Leslie W; Goldman, Stanley; Van Kampen, Kent R; Tang, De-chu C
2013-01-01
Bacillus anthracis is the causative agent of anthrax, and its spores have been developed into lethal bioweapons. To mitigate an onslaught from airborne anthrax spores that are maliciously disseminated, it is of paramount importance to develop a rapid-response anthrax vaccine that can be mass administered by nonmedical personnel during a crisis. We report here that intranasal instillation of a nonreplicating adenovirus vector encoding B. anthracis protective antigen could confer rapid and sustained protection against inhalation anthrax in mice in a single-dose regimen in the presence of preexisting adenovirus immunity. The potency of the vaccine was greatly enhanced when codons of the antigen gene were optimized to match the tRNA pool found in human cells. In addition, an adenovirus vector encoding lethal factor can confer partial protection against inhalation anthrax and might be coadministered with a protective antigen-based vaccine.
Analyzing pERK Activation During Planarian Regeneration.
Fraguas, Susanna; Umesono, Yoshihiko; Agata, Kiyokazu; Cebrià, Francesc
2017-01-01
Planarians are an ideal model in which to study stem cell-based regeneration. After amputation, planarian pluripotent stem cells surrounding the wound proliferate to produce the regenerative blastema, in which they differentiate into the missing tissues and structures. Recent independent studies in planarians have shown that Smed-egfr-3, a gene encoding a homologue of epidermal growth factor (EGF) receptors, and DjerkA, which encodes an extracellular signal-regulated kinase (ERK), may control cell differentiation and blastema growth. However, because these studies were carried out in two different planarian species, the relationship between these two genes remains unclear. We have optimized anti-pERK immunostaining in Schmidtea mediterranea using the original protocol developed in Dugesia japonica. Both protocols are reported here, as most laboratories worldwide work with one of these two species. Using this protocol, we have determined that Smed-egfr-3 appears to be necessary for pERK activation during planarian regeneration.
NASA Astrophysics Data System (ADS)
Bright, Ido; Lin, Guang; Kutz, J. Nathan
2013-12-01
Compressive sensing is used to determine the flow characteristics around a cylinder (Reynolds number and pressure/flow field) from a sparse number of pressure measurements on the cylinder. Using a supervised machine learning strategy, library elements encoding the dimensionally reduced dynamics are computed for various Reynolds numbers. Convex L1 optimization is then used with a limited number of pressure measurements on the cylinder to reconstruct, or decode, the full pressure field and the resulting flow field around the cylinder. Aside from the highly turbulent regime (large Reynolds number), where only the Reynolds number can be identified, accurate reconstruction of the pressure field and Reynolds number is achieved. The proposed data-driven strategy thus achieves encoding of the fluid dynamics using the L2 norm, and robust decoding (flow field reconstruction) using the sparsity-promoting L1 norm.
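The convex L1 decoding step can be sketched with plain iterative soft-thresholding (ISTA); the library, sensor matrix, and sparsity level below are synthetic stand-ins for the paper's flow-field modes and cylinder pressure sensors:

```python
import numpy as np

def ista(C, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for min 0.5*||C a - y||^2 + lam*||a||_1,
    a simple stand-in for the convex L1 decoding step."""
    L = np.linalg.norm(C, 2)**2              # Lipschitz constant of gradient
    a = np.zeros(C.shape[1])
    for _ in range(iters):
        g = a - (C.T @ (C @ a - y)) / L      # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return a

rng = np.random.default_rng(0)
Psi = rng.standard_normal((200, 50))         # library of 50 flow-field modes
a_true = np.zeros(50); a_true[[3, 17]] = [2.0, -1.0]  # sparse coefficients
M = rng.standard_normal((15, 200))           # 15 sparse pressure sensors
y = M @ (Psi @ a_true)                       # sensor measurements
a_hat = ista(M @ Psi, y)
print("recovered support:", np.nonzero(np.abs(a_hat) > 0.1)[0])  # [3 17]
```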
The child brain computes and utilizes internalized maternal choices
Lim, Seung-Lark; Cherry, J. Bradley C.; Davis, Ann M.; Balakrishnan, S. N.; Ha, Oh-Ryeong; Bruce, Jared M.; Bruce, Amanda S.
2016-01-01
As children grow, they gradually learn how to make decisions independently. However, decisions like choosing healthy but less-tasty foods can be challenging for children whose self-regulation and executive cognitive functions are still maturing. We propose a computational decision-making process in which children estimate their mother's choices for them as well as their individual food preferences. By employing functional magnetic resonance imaging during real food choices, we find that the ventromedial prefrontal cortex (vmPFC) encodes children's own preferences and the left dorsolateral prefrontal cortex (dlPFC) encodes the projected mother's choices for them at the time of children's choice. Also, the left dlPFC region shows an inhibitory functional connectivity with the vmPFC at the time of children's own choice. Our study suggests that, in part, children utilize their perceived caregiver's choices when making choices for themselves, which may serve as an external regulator of decision-making, leading to optimal healthy decisions. PMID:27218420
Effects of strategy on visual working memory capacity
Bengson, Jesse J.; Luck, Steven J.
2015-01-01
Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity. PMID:26139356
Efficient Quantum Transmission in Multiple-Source Networks
Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-01-01
A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590
Identifying musical pieces from fMRI data using encoding and decoding models.
Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge
2018-02-02
Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
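A minimal sketch of the two-stage encode-then-identify idea, assuming synthetic musical features, a linear feature-to-voxel map, and ridge regression; none of the names or dimensions come from the study.

```python
# Hedged sketch: fit an encoding model on training data (stage 1), then
# identify which candidate piece produced a measured response (stage 2).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
T, F, V = 200, 10, 300          # time points, musical features, voxels

X_train = rng.standard_normal((T, F))            # training stimulus features
W = rng.standard_normal((F, V))                  # "true" feature-to-voxel map
Y_train = X_train @ W + 0.5 * rng.standard_normal((T, V))

enc = Ridge(alpha=1.0).fit(X_train, Y_train)     # stage 1: encoding model

# Stage 2: score each novel piece by how well its predicted response
# correlates with the measured response, and pick the best match.
pieces = [rng.standard_normal((50, F)) for _ in range(5)]
true_idx = 2
Y_test = pieces[true_idx] @ W + 0.5 * rng.standard_normal((50, V))

scores = [np.corrcoef(enc.predict(p).ravel(), Y_test.ravel())[0, 1] for p in pieces]
print("identified piece:", int(np.argmax(scores)), "(true:", true_idx, ")")
```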
Power-Aware Compiler Controllable Chip Multiprocessor
NASA Astrophysics Data System (ADS)
Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori
A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.
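The power figures above follow the usual first-order model in which dynamic power scales as P ≈ C·V²·f, so lowering the clock permits a lower supply voltage and a superlinear saving; a back-of-envelope sketch with illustrative values:

```python
# Back-of-envelope sketch of combined voltage/frequency scaling:
# dynamic power roughly follows P ~ C * V^2 * f (values are illustrative).
def dynamic_power(c_eff, volt, freq):
    return c_eff * volt**2 * freq

p_full = dynamic_power(1.0, 1.2, 1.0)        # full clock, full voltage
p_half = dynamic_power(1.0, 0.9, 0.5)        # half clock allows lower voltage
print(f"power reduction: {100 * (1 - p_half / p_full):.1f}%")   # ~71.9%
```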
FRET and BRET-based biosensors in live cell compound screens.
Robinson, Katie Herbst; Yang, Jessica R; Zhang, Jin
2014-01-01
Live cell compound screening with genetically encoded fluorescence or bioluminescence-based biosensors offers a potentially powerful approach to identify novel regulators of a signaling event of interest. In particular, compound screening in living cells has the added benefit that the entire signaling network remains intact, and thus the screen is not just against a single molecule of interest but against any molecule within the signaling network that may modulate the distinct signaling event reported by the biosensor in use. Furthermore, only molecules that are cell permeable or act at cell surface receptors will be identified as "hits," thus reducing the need for further optimization of the compound in terms of cell penetration. Here we discuss a detailed protocol for using genetically encoded biosensors in living cells in a 96-well format for the execution of high-throughput compound screens and the identification of small molecules that modulate a signaling event of interest.
A genetically encoded fluorescent sensor of ERK activity.
Harvey, Christopher D; Ehrhardt, Anka G; Cellurale, Cristina; Zhong, Haining; Yasuda, Ryohei; Davis, Roger J; Svoboda, Karel
2008-12-09
The activity of the ERK has complex spatial and temporal dynamics that are important for the specificity of downstream effects. However, current biochemical techniques do not allow for the measurement of ERK signaling with fine spatiotemporal resolution. We developed a genetically encoded, FRET-based sensor of ERK activity (the extracellular signal-regulated kinase activity reporter, EKAR), optimized for signal-to-noise ratio and fluorescence lifetime imaging. EKAR selectively and reversibly reported ERK activation in HEK293 cells after epidermal growth factor stimulation. EKAR signals were correlated with ERK phosphorylation, required ERK activity, and did not report the activities of JNK or p38. EKAR reported ERK activation in the dendrites and nucleus of hippocampal pyramidal neurons in brain slices after theta-burst stimuli or trains of back-propagating action potentials. EKAR therefore permits the measurement of spatiotemporal ERK signaling dynamics in living cells, including in neuronal compartments in intact tissues.
Sundvall, Erik; Wei-Kleiner, Fang; Freire, Sergio M; Lambrix, Patrick
2017-01-01
Archetype-based Electronic Health Record (EHR) systems using generic reference models from e.g. openEHR, ISO 13606 or CIMI should be easy to update and reconfigure with new types (or versions) of data models or entries, ideally with very limited programming or manual database tweaking. Exploratory research (e.g. epidemiology) leading to ad-hoc querying on a population-wide scale can be a challenge in such environments. This publication describes implementation and test of an archetype-aware Dewey encoding optimization that can be used to produce such systems in environments supporting relational operations, e.g. RDBMSs and distributed map-reduce frameworks like Hadoop. Initial testing was done using a nine-node 2.2 GHz quad-core Hadoop cluster querying a dataset consisting of targeted extracts from 4+ million real patient EHRs; query results with sub-minute response times were obtained.
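A minimal sketch of plain (not archetype-aware) Dewey encoding, assuming dot-separated codes; it shows why subtree queries reduce to prefix matches that map naturally onto relational operations or map-reduce:

```python
# Minimal sketch of Dewey-style path encoding for hierarchical EHR entries
# (illustrative; the paper's archetype-aware variant adds more structure).
rows = [
    ("1",     "composition"),
    ("1.1",   "observation: blood_pressure"),
    ("1.1.1", "systolic=140"),
    ("1.1.2", "diastolic=90"),
    ("1.2",   "observation: heart_rate"),
]

def descendants(prefix, table):
    # Dewey codes make subtree retrieval a simple prefix match,
    # which a relational engine or map-reduce job evaluates cheaply.
    return [(d, v) for d, v in table if d == prefix or d.startswith(prefix + ".")]

print(descendants("1.1", rows))
```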
U.S. Geological Survey DLG-3 and Bureau of the Census TIGER data. Development and GIS applications
Batten, Lawrence G.
1990-01-01
The U.S. Geological Survey has been actively developing digital cartographic and geographic data and standards since the early 1970s. One product is Digital Line Graph data, which offer a consistently accurate source of base-category geographic information. The Bureau of the Census has combined its Dual Independent Map Encoding data with the Geological Survey's 1:100,000-scale Digital Line Graph data to prepare for the 1990 decennial census. The resulting Topologically Integrated Geographic Encoding and Referencing data offer a wealth of information. A major area of research using these data is transportation analysis. The attributes associated with Digital Line Graphs can be used to determine the average travel times along each segment. Geographic information system functions can then be used to optimize routes through the network and to generate street name lists. Additional aspects of the subject are discussed.
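The route-optimization step reduces to a shortest-path search once each segment carries an average travel time; a toy sketch with a hypothetical four-node street graph and plain Dijkstra:

```python
# Sketch of the routing idea: segment attributes give travel times, then a
# shortest-path search over the network (toy graph, plain Dijkstra).
import heapq

def dijkstra(graph, start, goal):
    dist, heap, prev = {start: 0}, [(0, start)], {}
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                        # stale heap entry
        for nxt, minutes in graph[node]:
            nd = d + minutes
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return dist[goal], path[::-1]

# street segments with average travel times in minutes (illustrative)
graph = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("B", 1), ("D", 8)], "D": []}
print(dijkstra(graph, "A", "D"))   # (8, ['A', 'C', 'B', 'D'])
```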
Multiplex PCR for Rapid Detection of Genes Encoding Class A Carbapenemases
Hong, Sang Sook; Kim, Kyeongmi; Huh, Ji Young; Jung, Bochan; Kang, Myung Seo
2012-01-01
In recent years, there have been increasing reports of KPC-producing Klebsiella pneumoniae in Korea. The modified Hodge test can be used as a phenotypic screening test for class A carbapenemase (CAC)-producing clinical isolates; however, it does not distinguish between carbapenemase types. The confirmation of the type of CAC is important to ensure optimal therapy and to prevent transmission. This study applied a novel multiplex PCR assay to detect and differentiate CAC genes in a single reaction. Four primer pairs were designed to amplify fragments encoding 4 CAC families (SME, IMI/NMC-A, KPC, and GES). The multiplex PCR detected all genes tested for the 4 CAC families, which could be differentiated by fragment size according to gene type. This multiplex PCR offers a simple and useful approach for detecting and distinguishing CAC genes in carbapenem-resistant strains that are metallo-β-lactamase nonproducers. PMID:22950072
Jin, Zhao; Di Rienzi, Sara C.; Janzon, Anders; Werner, Jeff J.; Angenent, Largus T.; Dangl, Jeffrey L.; Fowler, Douglas M.
2015-01-01
Metagenomes derived from environmental microbiota encode a vast diversity of protein homologs. How this diversity impacts protein function can be explored through selection assays aimed to optimize function. While artificially generated gene sequence pools are typically used in selection assays, their usage may be limited because of technical or ethical reasons. Here, we investigate an alternative strategy, the use of soil microbial DNA as a starting point. We demonstrate this approach by optimizing the function of a widely occurring soil bacterial enzyme, 1-aminocyclopropane-1-carboxylate (ACC) deaminase. We identified a specific ACC deaminase domain region (ACCD-DR) that, when PCR amplified from the soil, produced a variant pool that we could swap into functional plasmids carrying ACC deaminase-encoding genes. Functional clones of ACC deaminase were selected for in a competition assay based on their capacity to provide nitrogen to Escherichia coli in vitro. The most successful ACCD-DR variants were identified after multiple rounds of selection by sequence analysis. We observed that previously identified essential active-site residues were fixed in the original unselected library and that additional residues went to fixation after selection. We identified a divergent essential residue whose presence hints at the possible use of alternative substrates and a cluster of neutral residues that did not influence ACCD performance. Using an artificial ACCD-DR variant library generated by DNA oligomer synthesis, we validated the same fixation patterns. Our study demonstrates that soil metagenomes are useful starting pools of protein-coding-gene diversity that can be utilized for protein optimization and functional characterization when synthetic libraries are not appropriate. PMID:26637602
Song, Xiaokai; Zhao, Xiaofang; Xu, Lixin; Yan, Ruofeng; Li, Xiangrui
2017-04-01
In our previous study, an effective DNA vaccine encoding Eimeria tenella TA4 and chicken IL-2 was constructed. In the present study, the immunization dose of the DNA vaccine pVAX1.0-TA4-IL-2 was further optimized. With the optimized dose, the dynamics of antibodies induced by the DNA vaccine was determined using indirect ELISA. To evaluate the immune protection duration of the DNA vaccine, two-week-old chickens were intramuscularly immunized twice and the induced efficacy was evaluated by challenging with E. tenella at 5, 9, 13, 17 and 21 weeks post the last immunization (PLI) separately. To evaluate the efficacy stability of the DNA vaccine, two-week-old chickens were immunized with 3 batches of the DNA vaccine, and the induced efficacy was evaluated by challenging with E. tenella. The results showed that the optimal dose was 25 μg. The induced antibody level persisted until 10 weeks PLI. For the challenge times of 5 and 9 weeks PLI, immunization resulted in ACIs of 182.28 and 162.23, above 160, showing effective protection. However, for the challenge times of 13, 17 and 21 weeks PLI, immunization resulted in ACIs below 160, which indicates poor protection. Therefore, the immune protection duration of the DNA vaccination was at least 9 weeks PLI. DNA immunization with three batches of the DNA vaccine resulted in ACIs of 187.52, 191.57 and 185.22, which demonstrated that the efficacies of the three batches of DNA vaccine were effective and stable. Overall, our results indicate that the DNA vaccine pVAX1.0-TA4-IL-2 has the potential to be developed as an effective vaccine against coccidiosis. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin
Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class traffic in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which provides different priorities between different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components, such as encoding, population initialization, fitness function and mutation, are all optimized in terms of the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, which means the optimization is realized. Finally, the simulation shows the efficiency of OGA.
Optimal patch code design via device characterization
NASA Astrophysics Data System (ADS)
Wu, Wencheng; Dalal, Edul N.
2012-01-01
In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
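A hedged sketch of the robustness/level-count tradeoff: given a measurement noise sigma in CIE L* and a required separation of k·sigma between adjacent levels, the usable number of code levels follows directly (the values of sigma and k below are assumptions, not the paper's):

```python
# Sketch: how many code levels fit in a printable L* range when adjacent
# levels must be separated by k*sigma to keep misreads unlikely.
def max_levels(l_min, l_max, sigma, k=6.0):
    # k = 6 is an illustrative separation choice, not the paper's value
    return int((l_max - l_min) / (k * sigma)) + 1

def level_values(l_min, l_max, n):
    step = (l_max - l_min) / (n - 1)
    return [l_min + i * step for i in range(n)]

n = max_levels(l_min=10.0, l_max=90.0, sigma=1.5, k=6.0)
print(n, level_values(10.0, 90.0, n))
```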
Profiling Charge Complementarity and Selectivity for Binding at the Protein Surface
Sulea, Traian; Purisima, Enrico O.
2003-01-01
A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins. PMID:12719221
Quantum Optimization of Fully Connected Spin Glasses
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim
2015-07-01
Many NP-hard problems can be seen as the task of finding a ground state of a disordered, highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.
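For orientation, a classical baseline for the same problem class: simulated annealing on a small Sherrington-Kirkpatrick instance with random ±1 couplings. This sketch does not model the quantum annealer or the Chimera embedding; all parameters are illustrative.

```python
# Toy classical baseline: simulated annealing on an SK spin glass.
import numpy as np

rng = np.random.default_rng(42)
n = 24
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T                      # symmetric, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1.0, 1.0], size=n)
T = 3.0
for sweep in range(2000):
    for i in range(n):
        dE = 2.0 * s[i] * (J[i] @ s)                # energy change of flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    T *= 0.995                                      # geometric cooling schedule
print("final energy:", energy(s))
```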
XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.
2018-01-01
Quantum Approximate Optimization Algorithm, further generalized as Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better performance of the algorithm is speculated due to a much smaller search space. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph-coloring as an example, we compare the performance of using XY model as a mixer that preserves the Hamming weight with the performance of adding a penalty term in the cost Hamiltonian.
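The feasibility-preservation claim can be checked numerically on a tiny instance: the ring XY mixer commutes with the total Hamming-weight operator, so evolution generated by it never leaves the constrained subspace (3-qubit sketch; the construction and notation are ours):

```python
# Numerical check: the XY mixer commutes with total Hamming weight.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
N1 = np.array([[0, 0], [0, 1]], dtype=complex)     # |1><1| counts an excitation

def op(single, site, n=3):
    # embed a single-qubit operator at `site` in an n-qubit tensor product
    mats = [single if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# ring XY mixer: sum over edges of X_i X_j + Y_i Y_j
H_xy = sum(op(X, i) @ op(X, (i + 1) % 3) + op(Y, i) @ op(Y, (i + 1) % 3)
           for i in range(3))
W = sum(op(N1, i) for i in range(3))                # total Hamming-weight operator

print("commutes:", np.allclose(H_xy @ W - W @ H_xy, 0))   # True
```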
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) larger computational burden; 2) no explicit mapping functions in KDLPP, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain optimal discriminant vectors, which would maximally optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inverse, which extracts the optimal discriminant vectors for DLPP without larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
NASA Astrophysics Data System (ADS)
Zhao, Zhao; Zhang, Jin; Li, Hai-yang; Zhou, Jian-yong
2017-01-01
The optimization of an LEO cooperative multi-spacecraft refueling mission considering the J2 perturbation and the targets' surplus propellant constraint is studied in this paper. First, a mission scenario is introduced. One service spacecraft and several target spacecraft run on an LEO near-circular orbit; the service spacecraft rendezvouses with several service positions one by one, and the target spacecraft transfer to the corresponding service positions respectively. Each target spacecraft returns to its original position after obtaining the required propellant, and the service spacecraft returns to its original position after refueling all target spacecraft. Next, an optimization model of this mission is built. The service sequence, orbital transfer time, and service position are used as design variables, whereas the propellant cost is used as the design objective. The J2 perturbation, the time constraint, and the target spacecraft's surplus propellant capability constraint are taken into account. Then, a hybrid two-level optimization approach is presented to solve the formulated mixed integer nonlinear programming (MINLP) problem. A hybrid-encoding genetic algorithm is adopted to seek the near-optimal solution in the upper-level optimization, while a linear relative dynamics equation considering the J2 perturbation is used to obtain the impulses of orbital transfer in the lower-level optimization. Finally, the effectiveness of the proposed model and method is validated by numerical examples.
Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Yoko; Aiyoshi, Eitaro
2002-10-15
A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using a GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is LP optimization alone, applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
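A generic sketch of the GA loop with the strategies named above (crossover, mutation, selection, elitism); the fitness function is a stub standing in for the three-dimensional core simulator, and all parameters are illustrative.

```python
# Generic GA skeleton: crossover, mutation, selection with elitism.
import random
random.seed(0)

N_GENES, POP, GENS = 20, 30, 40

def fitness(ind):                     # stub objective (stands in for the simulator)
    return sum(ind)

def crossover(a, b):
    cut = random.randrange(1, N_GENES)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:2]                               # elitism: best survive unchanged
    children = [mutate(crossover(*random.sample(pop[:10], 2)))
                for _ in range(POP - len(elite))]
    pop = elite + children
print("best fitness:", fitness(max(pop, key=fitness)))
```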
Optimal wavelets for biomedical signal compression.
Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario
2006-07-01
Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/decoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
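A sketch of the compression/distortion measurement using a fixed classic wavelet via PyWavelets; the paper instead optimizes a parameterized mother wavelet per signal, which this sketch does not attempt.

```python
# Sketch: keep a fraction of wavelet coefficients and measure distortion.
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t) + 0.1 * rng.standard_normal(1024)

coeffs = pywt.wavedec(signal, "db4", level=5)
flat = np.concatenate(coeffs)
keep = 0.5                                           # 50% compression rate
thresh = np.quantile(np.abs(flat), 1 - keep)
coeffs_c = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

rec = pywt.waverec(coeffs_c, "db4")[: len(signal)]
distortion = np.linalg.norm(signal - rec) / np.linalg.norm(signal)
print(f"distortion rate: {100 * distortion:.2f}%")
```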
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
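An empirical counterpart to the analytical computation: encode a biased NRZ bit stream with a standard rate-1/2 (7,5) convolutional code and estimate the output PSD with Welch's method (the code choice and all parameters are ours, not the report's).

```python
# Sketch: PSD of a convolutional encoder's output for a biased NRZ input.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
p_one = 0.7                                   # unequal symbol probabilities
bits = (rng.random(200_000) < p_one).astype(int)

def conv_encode(u):                           # generators 7 and 5 (octal)
    s1 = s2 = 0
    out = []
    for b in u:
        out.append(b ^ s1 ^ s2)               # g1 = 1 + D + D^2
        out.append(b ^ s2)                    # g2 = 1 + D^2
        s1, s2 = b, s1
    return np.array(out)

symbols = 1.0 - 2.0 * conv_encode(bits)       # NRZ mapping 0 -> +1, 1 -> -1
f, psd = welch(symbols, nperseg=4096)
print("strong DC component from biased input:", bool(psd[0] > psd[10]))
```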
Optimizing Sensing: From Water to the Web
2009-05-01
e.g., when applied to business practices, the Pareto Principle says that “80% of your sales come from 20% of your clients.” In water distribution... of this challenge included a realistic model of a real metropolitan area water distribution network (Figure 4(a)) with 12,527 nodes, as well as a...
2010-03-01
uses all available resources in some optimized manner. By further exploiting the design flexibility and computational efficiency of Orthogonal Frequency... in the following sections. 3.2.1 Estimation of PU Signal Statistics. The Estimate PU Signal Statistics function of Fig 3.4 is used to compute the... consecutive PU transmissions, and 4) the probability of transitioning from one transmission state to another. These statistics are then used to compute the
A global evolutionary and metabolic analysis of human obesity gene risk variants.
Castillo, Joseph J; Hazlett, Zachary S; Orlando, Robert A; Garver, William S
2017-09-05
It is generally accepted that the selection of gene variants during human evolution optimized energy metabolism that now interacts with our obesogenic environment to increase the prevalence of obesity. The purpose of this study was to perform a global evolutionary and metabolic analysis of human obesity gene risk variants (110 human obesity genes with 127 nearest gene risk variants) identified using genome-wide association studies (GWAS) to enhance our knowledge of early and late genotypes. As a result of determining the mean frequency of these obesity gene risk variants in 13 available populations from around the world, our results provide evidence for the early selection of ancestral risk variants (defined as selection before migration from Africa) and late selection of derived risk variants (defined as selection after migration from Africa). Our results also provide novel information on the association of these obesity genes or encoded proteins with diverse metabolic pathways and other human diseases. The overall results indicate a significant differential evolutionary pattern for the selection of obesity gene ancestral and derived risk variants proposed to optimize energy metabolism in varying global environments and a complex association with metabolic pathways and other human diseases. These results are consistent with obesity genes that encode proteins possessing a fundamental role in maintaining energy metabolism and survival during the course of human evolution. Copyright © 2017. Published by Elsevier B.V.
Hardt, Oliver; Nadel, Lynn
2009-01-01
Cognitive map theory suggested that exploring an environment and attending to a stimulus should lead to its integration into an allocentric environmental representation. We here report that directed attention in the form of exploration serves to gather information needed to determine an optimal spatial strategy, given task demands and characteristics of the environment. Attended environmental features may integrate into spatial representations if they meet the requirements of the optimal spatial strategy: when learning involves a cognitive mapping strategy, cues with high codability (e.g., concrete objects) will be incorporated into a map, but cues with low codability (e.g., abstract paintings) will not. However, instructions encouraging map learning can lead to the incorporation of cues with low codability. On the other hand, if spatial learning is not map-based, abstract cues can and will be used to encode locations. Since exploration appears to determine what strategy to apply and whether or not to encode a cue, recognition memory for environmental features is independent of whether or not a cue is part of a spatial representation. In fact, when abstract cues were used in a way that was not map-based, or when they were not used for spatial navigation at all, they were nevertheless recognized as familiar. Thus, the relation between exploratory activity on the one hand and spatial strategy and memory on the other appears more complex than initially suggested by cognitive map theory.
Spectral Re-Growth Reduction for CCSDS 8-D 8-PSK TCM
NASA Technical Reports Server (NTRS)
Borah, Deva K.
2002-01-01
This report presents a study of the CCSDS-recommended 8-dimensional 8-PSK Trellis Coded Modulation (TCM) scheme. The important steps of the CCSDS scheme include: conversion of serial data into parallel form, differential encoding, convolutional encoding, constellation mapping, and filtering of the 8-PSK symbols using square root raised cosine (SRRC) pulses. The last step, namely the filtering of the 8-PSK symbols using SRRC pulses, significantly affects the bandwidth of the signal. If a nonlinear power amplifier is used, the SRRC-filtered signal creates spectral regrowth. The purpose of this report is to investigate a technique, called smooth phase interpolated keying (SPIK), which can provide an alternative to SRRC filtering so that good spectral as well as power efficiencies can be obtained with the CCSDS encoder. The results of this study show that the CCSDS encoder does not affect the spectral shape of the SRRC-filtered signal or the SPIK signal. When a nonlinear traveling wave tube amplifier (TWTA) is used, the spectral performance of the SRRC signal degrades significantly while the spectral performance of SPIK remains unaffected. The degrading effect of a nonlinear solid state power amplifier (SSPA) on SRRC is found to be less than that due to a nonlinear TWTA. However, in both cases, the spectral performance of the SRRC-modulated signal is worse than that of the SPIK signal. The bit error rate (BER) performance of the SRRC signal in a linear amplifier environment is about 2.5 dB better than that of the SPIK signal when both receivers use algorithms of similar complexity. In a nonlinear TWTA environment, the SRRC signal requires accurate phase tracking since the TWTA introduces additional phase distortion. This problem does not arise with the SPIK signal due to its constant envelope property. When a nonlinear amplifier is used, the SRRC method loses nearly 1 dB in bit error rate performance. The SPIK signal does not lose any performance. Thus the performance gap between SRRC and SPIK narrows. The BER performance of SPIK can be improved even further by using a more nearly optimal receiver. A similar optimal receiver for SRRC is quite complex since the amplifier distorts the pulse shape. However, this requires further investigation and is not covered in this report.
Hingerl, Lukas; Moser, Philipp; Považan, Michal; Hangel, Gilbert; Heckova, Eva; Gruber, Stephan; Trattnig, Siegfried; Strasser, Bernhard
2017-01-01
Purpose: Full-slice magnetic resonance spectroscopic imaging at ≥7 T is especially vulnerable to lipid contaminations arising from regions close to the skull. This contamination can be mitigated by improving the point spread function via higher spatial resolution sampling and k-space filtering, but this prolongs scan times and reduces the signal-to-noise ratio (SNR) efficiency. Currently applied parallel imaging methods accelerate magnetic resonance spectroscopic imaging scans at 7T, but increase lipid artifacts and lower SNR efficiency further. In this study, we propose an SNR-efficient spatial-spectral sampling scheme using concentric circle echo planar trajectories (CONCEPT), which was adapted to intrinsically acquire a Hamming-weighted k-space, thus termed density-weighted-CONCEPT. This minimizes voxel bleeding, while preserving an optimal SNR. Theory and Methods: Trajectories were theoretically derived and verified in phantoms as well as in the human brain via measurements of five volunteers (single-slice, field-of-view 220 × 220 mm², matrix 64 × 64, scan time 6 min) with free induction decay magnetic resonance spectroscopic imaging. Density-weighted-CONCEPT was compared to (a) the originally proposed CONCEPT with equidistant circles (here termed e-CONCEPT), (b) elliptical phase-encoding, and (c) 5-fold Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration (CAIPIRINHA) accelerated elliptical phase-encoding. Results: By intrinsically sampling a Hamming-weighted k-space, density-weighted-CONCEPT removed Gibbs-ringing artifacts and had +9.5%, +24.4%, and +39.7% higher SNR in vivo than e-CONCEPT, elliptical phase-encoding, and the CAIPIRINHA-accelerated elliptical phase-encoding (all P < 0.05), respectively, which led to improved metabolic maps. Conclusion: Density-weighted-CONCEPT provides clinically attractive full-slice high-resolution magnetic resonance spectroscopic imaging with optimal SNR at 7T. Magn Reson Med 79:2874–2885, 2018. © 2017 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:29106742
Dynamical maps, quantum detailed balance, and the Petz recovery map
NASA Astrophysics Data System (ADS)
Alhambra, Álvaro M.; Woods, Mischa P.
2017-08-01
Markovian master equations (formally known as quantum dynamical semigroups) can be used to describe the evolution of a quantum state ρ when in contact with a memoryless thermal bath. This approach has had much success in describing the dynamics of real-life open quantum systems in the laboratory. Such dynamics increase the entropy of the state ρ and the bath until both systems reach thermal equilibrium, at which point entropy production stops. Our main result is to show that the entropy production at time t is bounded by the relative entropy between the original state and the state at time 2 t . The bound puts strong constraints on how quickly a state can thermalize, and we prove that the factor of 2 is tight. The proof makes use of a key physically relevant property of these dynamical semigroups, detailed balance, showing that this property is intimately connected with the field of recovery maps from quantum information theory. We envisage that the connections made here between the two fields will have further applications. We also use this connection to show that a similar relation can be derived when the fixed point is not thermal.
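In symbols, the main bound reads as follows (notation is ours; Σ(t) denotes the entropy production up to time t, ρ_t the state under the semigroup, and D the quantum relative entropy):

```latex
% Entropy-production bound stated in the abstract (notation is ours):
\Sigma(t) \;\le\; D\!\left(\rho_0 \,\middle\|\, \rho_{2t}\right),
\qquad
D(\rho\|\sigma) = \mathrm{Tr}\,\rho\left(\log\rho - \log\sigma\right).
```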
In Darwinian evolution, feedback from natural selection leads to biased mutations.
Caporale, Lynn Helena; Doyle, John
2013-12-01
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
NMR signals within the generalized Langevin model for fractional Brownian motion
NASA Astrophysics Data System (ADS)
Lisý, Vladimír; Tóthová, Jana
2018-03-01
The methods of Nuclear Magnetic Resonance belong to the best developed and often used tools for studying random motion of particles in different systems, including soft biological tissues. In the long-time limit the current mathematical description of the experiments allows proper interpretation of measurements of normal and anomalous diffusion. The shorter-time dynamics is however correctly considered only in a few works that do not go beyond the standard memoryless Langevin description of the Brownian motion (BM). In the present work, the attenuation function S (t) for an ensemble of spin-bearing particles in a magnetic-field gradient, expressed in a form applicable for any kind of stationary stochastic dynamics of spins with or without a memory, is calculated in the frame of the model of fractional BM. The solution of the model for particles trapped in a harmonic potential is obtained in an exceedingly simple way and used for the calculation of S (t). In the limit of free particles coupled to a fractal heat bath, the results compare favorably with experiments acquired in human neuronal tissues. The effect of the trap is demonstrated by introducing a simple model for the generalized diffusion coefficient of the particle.
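For Gaussian stationary dynamics of the spin coordinate x(t) along a gradient g, the attenuation function takes the standard form below, which the model-specific calculations specialize (notation is ours; γ is the gyromagnetic ratio and C_x the position autocorrelation):

```latex
% Generic attenuation function for Gaussian stationary spin dynamics:
S(t) = \left\langle e^{\,i\gamma g\int_0^t x(t')\,dt'}\right\rangle
     = \exp\!\left[-\frac{\gamma^2 g^2}{2}
       \int_0^t\!\!\int_0^t C_x(t_1 - t_2)\,dt_1\,dt_2\right]
```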
Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics
NASA Astrophysics Data System (ADS)
Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal
2017-12-01
Graph traversal algorithms find applications in various fields such as routing problems, natural language processing, and database querying. Traversal can be considered a first stepping stone toward knowledge extraction from graphs, which is now a popular topic. Classical solutions such as Breadth First Search (BFS) and Depth First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of various dynamics on our algorithm, we present an original way to explore a parameter space using a bifurcation diagram with respect to the topological structure of attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. CHAT performance using the Lozi map is proven superior to the commonly known random walk in terms of the number of nodes visited (coverage percentage) and computation time when the environment is unknown and memory usage is restricted.
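A minimal sketch of a chaos-driven, memoryless traversal in the spirit of CHAT, with the Lozi map replacing a random number generator for neighbor selection; the graph and the neighbor-selection rule are our illustrative choices, not the paper's:

```python
# Chaos-driven memoryless traversal sketch using the Lozi map
# (a = 1.7, b = 0.5 is the classic chaotic parameter regime).
def lozi_walk(graph, start, steps, a=1.7, b=0.5):
    x, y = 0.1, 0.1
    node, visited = start, {start}
    for _ in range(steps):
        x, y = 1.0 - a * abs(x) + y, b * x          # Lozi map iteration
        nbrs = graph[node]
        node = nbrs[int(abs(x) * 1e6) % len(nbrs)]  # chaotic neighbor pick
        visited.add(node)
    return visited

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print("coverage:", len(lozi_walk(graph, 0, 50)) / len(graph))
```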
NASA Astrophysics Data System (ADS)
Vicente, Renato; de Toledo, Charles M.; Leite, Vitor B. P.; Caticha, Nestor
2006-02-01
We investigate the Heston model with stochastic volatility and exponential tails as a model for the typical price fluctuations of the Brazilian São Paulo Stock Exchange Index (IBOVESPA). Raw prices are first corrected for inflation, and a period spanning 15 years characterized by memoryless returns is chosen for the analysis. Model parameters are estimated by observing volatility scaling and correlation properties. We show that the Heston model with at least two time scales for the volatility mean-reverting dynamics satisfactorily describes price fluctuations ranging from time scales larger than 20 min to 160 days. At time scales shorter than 20 min we observe autocorrelated returns and power law tails incompatible with the Heston model. Despite major regulatory changes, hyperinflation and currency crises experienced by the Brazilian market in the period studied, the general success of the description provided may be regarded as evidence of a general underlying dynamics of price fluctuations at intermediate mesoeconomic time scales well approximated by the Heston model. We also notice that the connection between the Heston model and Ehrenfest urn models could be exploited to bring new insights into the microeconomic market mechanics.
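A minimal Euler-Maruyama sketch of Heston dynamics with a single volatility time scale (the paper fits at least two); all parameter values are illustrative:

```python
# Euler-Maruyama simulation of the Heston model (illustrative parameters).
import numpy as np

rng = np.random.default_rng(11)
mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.5
dt, n = 1 / 252, 252 * 15                        # daily steps, ~15 years

s, v = np.empty(n), np.empty(n)
s[0], v[0] = 100.0, theta
for k in range(n - 1):
    z1, z2 = rng.standard_normal(2)
    dw_s = np.sqrt(dt) * z1
    dw_v = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)  # correlated noise
    # reflect to keep the variance positive (simple numerical fix)
    v[k + 1] = np.abs(v[k] + kappa * (theta - v[k]) * dt + xi * np.sqrt(v[k]) * dw_v)
    s[k + 1] = s[k] * np.exp((mu - 0.5 * v[k]) * dt + np.sqrt(v[k]) * dw_s)

returns = np.diff(np.log(s))
print("excess kurtosis of daily returns:", float(
    ((returns - returns.mean())**4).mean() / returns.var()**2 - 3))
```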
Position-based coding and convex splitting for private communication over quantum channels
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2017-10-01
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0, 1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
Spectral decomposition of nonlinear systems with memory
NASA Astrophysics Data System (ADS)
Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.
2016-02-01
We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
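The anomalous mode relaxation referred to above is governed by the Mittag-Leffler function, whose standard definition is:

```latex
% Mittag-Leffler relaxation of a mode amplitude (standard definition):
\phi(t) \sim E_\alpha\!\left(-\lambda t^{\alpha}\right),
\qquad
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\quad 0 < \alpha \le 1,
% which reduces to the memoryless exponential e^{-\lambda t} at \alpha = 1.
```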
DNA polymerase ι: The long and the short of it!
Frank, Ekaterina G; McLenigan, Mary P; McDonald, John P; Huston, Donald; Mead, Samantha; Woodgate, Roger
2017-10-01
The cDNA encoding human DNA polymerase ι (POLI) was cloned in 1999. At that time, it was believed that the POLI gene encoded a protein of 715 amino acids. Advances in DNA sequencing technologies led to the realization that there is an upstream, in-frame initiation codon that would encode a DNA polymerase ι (polι) protein of 740 amino acids. The extra 25-amino-acid region is rich in acidic residues (11/25) and is reasonably conserved in eukaryotes ranging from fish to humans. As a consequence, the curated Reference Sequence (RefSeq) database identified polι as a 740 amino acid protein. However, the existence of the 740 amino acid polι has never been shown experimentally. Using highly specific antibodies to the 25 N-terminal amino acids of polι, we were unable to detect the longer 740 amino acid (ι-long) isoform in western blots. However, trace amounts of the ι-long isoform were detected after enrichment by immunoprecipitation. One might argue that the longer isoform may have a distinct biological function if it exhibits significant differences in its enzymatic properties from the shorter, well-characterized 715 amino acid polι. We therefore purified and characterized recombinant full-length (740 amino acid) polι-long and compared it to full-length (715 amino acid) polι-short in vitro. The metal ion requirements for optimal catalytic activity differ slightly between ι-long and ι-short, but under optimal conditions, both isoforms exhibit indistinguishable enzymatic properties in vitro. We also report that, like ι-short, the ι-long isoform can be monoubiquitinated and polyubiquitinated in vivo, as well as form damage-induced foci in vivo. We conclude that the predominant isoform of DNA polι in human cells is the shorter 715 amino acid protein and that if, or when, expressed, the longer 740 amino acid isoform has properties identical to those of the considerably more abundant shorter isoform. Published by Elsevier B.V.
NEREC, an effective brain mapping protocol for combined language and long-term memory functions.
Perrone-Bertolotti, Marcela; Girard, Cléa; Cousin, Emilie; Vidal, Juan Ricardo; Pichat, Cédric; Kahane, Philippe; Baciu, Monica
2015-12-01
Temporal lobe epilepsy can induce functional plasticity in temporoparietal networks involved in language and long-term memory processing. Previous studies in healthy subjects have revealed the relative difficulty for this network to respond effectively across different experimental designs, as compared to more reactive regions such as frontal lobes. For a protocol to be optimal for clinical use, it has to first show robust effects in a healthy cohort. In this study, we developed a novel experimental paradigm entitled NEREC, which is able to reveal the robust participation of temporoparietal networks in a uniquely combined language and memory task, validated in an fMRI study with healthy subjects. Concretely, NEREC is composed of two runs: (a) an intermixed language-memory task (confrontation naming associated with encoding in nonverbal items, NE) to map language (i.e., word retrieval and lexico-semantic processes) combined with simultaneous long-term verbal memory encoding (NE items named but also explicitly memorized) and (b) a memory retrieval task of items encoded during NE (word recognition, REC) intermixed with new items. Word recognition is based on both perceptual-semantic familiarity (feeling of 'know') and accessing stored memory representations (remembering). In order to maximize the remembering and recruitment of medial temporal lobe structures, we increased REC difficulty by changing the modality of stimulus presentation (from nonverbal during NE to verbal during REC). We report that (a) temporoparietal activation during NE was attributable to both lexico-semantic (language) and memory (episodic encoding and semantic retrieval) processes; that (b) encoding activated the left hippocampus, bilateral fusiform, and bilateral inferior temporal gyri; and that (c) task recognition (recollection) activated the right hippocampus and bilateral but predominant left fusiform gyrus. The novelty of this protocol consists of (a) combining two tasks in one (language and long-term memory encoding/recall) instead of applying isolated tasks to map temporoparietal regions, (b) analyzing NE data based on performances recorded during REC, (c) double-mapping networks involved in naming and in long-term memory encoding and retrieval, (d) focusing on remembering with hippocampal activation and familiarity judgment with lateral temporal cortices activation, and (e) short duration of examination and feasibility. These aspects are of particular interest in patients with TLE, who frequently show impairment of these cognitive functions. Here, we show that the novel protocol is suited for this clinical evaluation. Copyright © 2015 Elsevier Inc. All rights reserved.
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth-dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post-processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
2014-01-01
Background Heterologous gene expression is an important tool for synthetic biology that enables metabolic engineering and the production of non-natural biologics in a variety of host organisms. The translational efficiency of heterologous genes can often be improved by optimizing synonymous codon usage to better match the host organism. However, traditional approaches for optimization neglect to take into account many factors known to influence synonymous codon distributions. Results Here we define an alternative approach for codon optimization that utilizes systems-level information and codon context for the condition under which heterologous genes are being expressed. Furthermore, we utilize a probabilistic algorithm to generate multiple variants of a given gene. We demonstrate improved translational efficiency using this condition-specific codon optimization approach with two heterologous genes, the fluorescent protein-encoding eGFP and the catechol 1,2-dioxygenase gene CatA, expressed in S. cerevisiae. For the latter case, optimization for stationary-phase production resulted in nearly 2.9-fold improvements over commercial gene optimization algorithms. Conclusions Codon optimization is now often a standard tool for protein expression, and while a variety of tools and approaches have been developed, they do not guarantee improved performance for all hosts or applications. Here, we suggest an alternative method for condition-specific codon optimization and demonstrate its utility in Saccharomyces cerevisiae as a proof of concept. However, this technique should be applicable to any organism for which gene expression data can be generated and is thus of potential interest for a variety of applications in metabolic and cellular engineering. PMID:24636000
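As a minimal illustration of the probabilistic approach described above, the following Python sketch samples synonymous codons from condition-specific usage frequencies to generate multiple gene variants; the frequency table and the toy protein sequence are invented for illustration, not data from the study.

import random

# Illustrative condition-specific codon usage frequencies (assumed values,
# not measured S. cerevisiae data): amino acid -> codon -> relative frequency.
CODON_FREQS = {
    "L": {"TTG": 0.45, "CTG": 0.15, "TTA": 0.25, "CTA": 0.15},
    "S": {"TCT": 0.40, "TCC": 0.25, "AGT": 0.20, "TCA": 0.15},
    "K": {"AAA": 0.55, "AAG": 0.45},
}

def sample_variant(protein, freqs, rng=random):
    """Draw one synonymous gene variant by sampling each codon
    from the condition-specific distribution for its amino acid."""
    codons = []
    for aa in protein:
        table = freqs[aa]
        codons.append(rng.choices(list(table), weights=table.values())[0])
    return "".join(codons)

# Generate several candidate variants for downstream screening.
variants = {sample_variant("LSKL", CODON_FREQS) for _ in range(10)}
print(variants)

Because sampling is probabilistic rather than rule-based, repeated draws naturally yield the multiple gene variants the abstract describes.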
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram
2010-01-01
MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794
Efficient spectroscopic imaging by an optimized encoding of pre-targeted resonances
Zhang, Zhiyong; Shemesh, Noam; Frydman, Lucio
2016-01-01
A “relaxation-enhanced” (RE) selective-excitation approach to acquire in vivo localized spectra with flat baselines and very good signal-to-noise ratios, particularly at high fields, was recently proposed. As RE MRS targets a subset of a priori known resonances, new possibilities arise to acquire spectroscopic imaging data in a faster, more efficient manner. Here we present one such opportunity, based on what we denominate Relaxation-Enhanced Chemical-shift-Encoded Spectroscopically-Separated (RECESS) imaging. RECESS delivers spectral/spatial correlations of various metabolites by collecting a gradient echo train whose timing is defined by the chemical shifts of the various selectively excited resonances to be disentangled. Different sites thus impart distinct, coherent phase modulations on the images; condition number considerations allow one to disentangle the contributions of the various sites by a simple matrix inversion. The efficiency of the ensuing spectral/spatial correlation method is high enough to enable the examination of additional spatial axes via their phase encoding in CPMG-like spin-echo trains. The resulting single-shot 1D spectral / 2D spatial RECESS method thus accelerates the acquisition of quality MRSI data by factors that, depending on the sensitivity, range between 2 and 50. This is illustrated with a number of phantom, ex vivo, and in vivo acquisitions. PMID:26910285
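The separation step described above can be illustrated with a small numerical sketch: each pre-targeted resonance imparts a known phase modulation across the echo train, so stacking these modulations into an encoding matrix lets a pseudo-inverse recover the per-site contributions, with the condition number flagging unstable combinations. The chemical shifts and echo times below are arbitrary assumed values.

import numpy as np

# Assumed chemical shifts (Hz) of pre-targeted resonances and echo times (s).
shifts = np.array([0.0, 120.0, 310.0])                 # three selectively excited sites
echo_times = np.array([0.0, 1.2e-3, 2.4e-3, 3.6e-3])   # gradient-echo train

# Encoding matrix: each site imparts a coherent phase e^{i 2 pi f t} per echo.
A = np.exp(1j * 2 * np.pi * np.outer(echo_times, shifts))  # (n_echoes, n_sites)
print("condition number:", np.linalg.cond(A))  # large -> unstable separation

# Simulate mixed echo signals for one voxel and unmix by pseudo-inverse.
true_amplitudes = np.array([1.0, 0.5, 0.2])
mixed = A @ true_amplitudes
recovered = np.linalg.pinv(A) @ mixed
print(np.round(recovered.real, 3))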
Continuous in vitro evolution of bacteriophage RNA polymerase promoters
NASA Technical Reports Server (NTRS)
Breaker, R. R.; Banerji, A.; Joyce, G. F.
1994-01-01
Rapid in vitro evolution of bacteriophage T7, T3, and SP6 RNA polymerase promoters was achieved by a method that allows continuous enrichment of DNAs that contain functional promoter elements. This method exploits the ability of a special class of nucleic acid molecules to replicate continuously in the presence of both a reverse transcriptase and a DNA-dependent RNA polymerase. Replication involves the synthesis of both RNA and cDNA intermediates. The cDNA strand contains an embedded promoter sequence, which becomes converted to a functional double-stranded promoter element, leading to the production of RNA transcripts. Synthetic cDNAs, including those that contain randomized promoter sequences, can be used to initiate the amplification cycle. However, only those cDNAs that contain functional promoter sequences are able to produce RNA transcripts. Furthermore, each RNA transcript encodes the RNA polymerase promoter sequence that was responsible for initiation of its own transcription. Thus, the population of amplifying molecules quickly becomes enriched for those templates that encode functional promoters. Optimal promoter sequences for phage T7, T3, and SP6 RNA polymerase were identified after a 2-h amplification reaction, initiated in each case with a pool of synthetic cDNAs encoding greater than 10^10 promoter sequence variants.
Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System
Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.
2015-01-01
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843
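The two discrimination strategies compared in the study, spike counts versus spike timing, can be sketched with a toy nearest-centroid decoder in which only first-spike latency distinguishes two sound levels; the synthetic spike trains and all parameters below are assumptions for illustration, not the recorded data.

import numpy as np

rng = np.random.default_rng(0)

def binned(spike_times, t_max, bin_ms):
    """Bin a spike train at a given temporal resolution (one wide bin ~ count code)."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    return np.histogram(spike_times, bins=edges)[0].astype(float)

# Synthetic trials: both levels evoke the same spike count, but different
# first-spike latencies, so only timing carries level information (assumption).
def trial(level, t_max=200.0, n_spikes=10):
    latency = 20.0 if level == 0 else 40.0
    return np.sort(rng.uniform(latency, t_max, n_spikes))

def accuracy(bin_ms, n_train=50, n_test=50):
    train = {lv: [binned(trial(lv), 200.0, bin_ms) for _ in range(n_train)] for lv in (0, 1)}
    centroids = {lv: np.mean(v, axis=0) for lv, v in train.items()}
    correct = 0
    for lv in (0, 1):
        for _ in range(n_test):
            x = binned(trial(lv), 200.0, bin_ms)
            pred = min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
            correct += pred == lv
    return correct / (2 * n_test)

for bin_ms in (200.0, 50.0, 10.0):   # 200 ms bin reduces to a pure spike count
    print(f"bin {bin_ms:5.0f} ms -> accuracy {accuracy(bin_ms):.2f}")

Running this shows count-based accuracy near chance while finer bins recover the timing information, mirroring the study's count-versus-timing comparison at varying temporal resolution.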
Optimization of a Boiling Water Reactor Loading Pattern Using an Improved Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Yoko; Aiyoshi, Eitaro
2003-08-15
A search method based on genetic algorithms (GA) using deterministic operators has been developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). The search method uses improved GA operators, namely crossover, mutation, and selection. The handling of the encoding technique and constraint conditions is designed so that the GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are used effectively to improve the search speed. LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and three-dimensional-dependent constraints have always necessitated the use of three-dimensional core simulators for BWRs, so an optimization method is required for computational efficiency. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant applying the Haling technique. In test calculations, candidates that shuffled fresh and burned fuel assemblies were obtained within a reasonable computation time.
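A stripped-down version of such a GA loop, with tournament selection, swap mutation over a permutation encoding, and elitism, is sketched below; the fitness function is a placeholder standing in for the coupled neutronic/thermal-hydraulic evaluation, and crossover is omitted for brevity.

import random

random.seed(1)
N = 12                          # toy core positions / fuel assemblies
target = list(range(N))         # placeholder "ideal" pattern

def fitness(lp):
    # Stand-in for the 3-D diffusion-code evaluation: here, just reward
    # closeness to an arbitrary reference loading pattern.
    return -sum(abs(a - b) for a, b in zip(lp, target))

def mutate(lp):
    i, j = random.sample(range(N), 2)   # swap two assemblies (keeps a permutation)
    lp = lp[:]
    lp[i], lp[j] = lp[j], lp[i]
    return lp

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

pop = [random.sample(range(N), N) for _ in range(40)]
for gen in range(200):
    elite = max(pop, key=fitness)                 # elitism: carry the best LP forward
    children = [mutate(tournament(pop)) for _ in range(len(pop) - 1)]
    pop = [elite] + children
print("best fitness:", fitness(max(pop, key=fitness)))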
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
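A toy version of the PCA-plus-ANN pipeline might look as follows; the dipole-like sensor model, noise level, and network size are illustrative assumptions rather than the system described above.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Assumed toy field model: 9 sensors measure a field that decays with distance
# from a 1-D magnet position x (the real geometry is considerably richer).
sensor_pos = np.linspace(-4.0, 4.0, 9)
def readings(x):
    return 1.0 / (1.0 + (sensor_pos - x) ** 2)

positions = rng.uniform(-3, 3, 2000)
X = np.array([readings(x) + 0.01 * rng.standard_normal(9) for x in positions])

# PCA as a pseudo-linear filter: project onto the top principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                      # reduce 9 sensor channels to 3

# ANN maps the reduced field measurements back to position.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(Z[:1500], positions[:1500])
err = np.abs(net.predict(Z[1500:]) - positions[1500:])
print("median |error|:", np.median(err))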
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computation cost and solutions lacking geological plausibility have hindered the wide application of Full Waveform Inversion (FWI). Source encoding dramatically reduces the cost of FWI, but it requires a fixed-spread acquisition setup and converges slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of the entire shot set for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
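The mini-batch loop with gradient smoothing can be sketched as follows; the per-shot gradient is a synthetic placeholder for the adjoint-state computation, and a simple anisotropic Gaussian stands in for true structure-oriented smoothing along reflector dips.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
true_model = gaussian_filter(rng.standard_normal((60, 120)), 5)  # toy velocity perturbation
model = np.zeros_like(true_model)
n_shots, batch = 64, 8

def shot_gradient(model, shot):
    # Placeholder for the adjoint-state gradient of one shot's misfit;
    # a noisy pull toward the true model stands in for the real physics.
    noise = rng.standard_normal(model.shape) * 0.5
    return (model - true_model) + noise

for it in range(100):
    shots = rng.choice(n_shots, size=batch, replace=False)  # mini-batch of shots
    g = sum(shot_gradient(model, s) for s in shots) / batch
    # Stand-in for structure-oriented smoothing: an anisotropic Gaussian that
    # smooths more along the (assumed horizontal) structure than across it.
    g = gaussian_filter(g, sigma=(1, 4))
    model -= 0.2 * g

print("relative misfit:", np.linalg.norm(model - true_model) / np.linalg.norm(true_model))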
Jasniewski, Jordane; Cailliez-Grimal, Catherine; Gelhaye, Eric; Revol-Junelles, Anne-Marie
2008-04-01
An optimization of the production and purification processes of carnobacteriocins Cbn BM1 and Cbn B2 from Carnobacterium maltaromaticum CP5, by heterologous expression in Escherichia coli, is described. The genes encoding the mature bacteriocins were cloned into an E. coli expression system and expressed as fusion proteins with a thermostable thioredoxin. Recombinant E. coli were cultivated following a fed-batch fermentation process with pH, temperature and oxygenation regulation. The overexpression of the fusion proteins was improved by replacing IPTG with lactose. The fusion proteins were purified by thermal coagulation followed by affinity chromatography. The thioredoxin fusion partner was removed by using CNBr instead of enterokinase, and the carnobacteriocins were recovered by reverse-phase chromatography. These optimizations allowed us to produce up to 320 mg of pure protein per liter of culture, four- to ten-fold higher than what has been described for other heterologous expression systems.
Liu, Binyan; Gu, Shiling; Liang, Nengsong; Xiong, Mei; Xue, Qizhen; Lu, Shuguang; Hu, Fuquan; Zhang, Huidong
2016-08-01
Most phages contain DNA polymerases, which are essential for DNA replication and propagation in infected host bacteria. However, our knowledge of phage-encoded DNA polymerases remains limited. This study investigated the function of a novel DNA polymerase of PaP1, the lytic phage of Pseudomonas aeruginosa. PaP1 encodes a sole DNA polymerase, Gp90, which was predicted to be an A-family DNA polymerase with polymerase and 3'-5' exonuclease activities. The sequence of Gp90 is homologous but not identical to that of other A-family DNA polymerases, such as T7 DNA polymerase (Pol) and DNA Pol I. The purified Gp90 demonstrated polymerase activity. The processivity of Gp90 in DNA replication and its efficiency in single-dNTP incorporation are similar to those of T7 Pol with processive thioredoxin (T7 Pol/trx). Gp90 can degrade ssDNA and dsDNA in the 3'-5' direction at a similar rate, which is considerably lower than that of T7 Pol/trx. The optimal conditions for polymerization were a temperature of 37 °C and a buffer consisting of 40 mM Tris-HCl (pH 8.0), 30 mM MgCl2, and 200 mM NaCl. These studies on the DNA polymerase encoded by PaP1 help advance our knowledge of phage-encoded DNA polymerases and elucidate PaP1 propagation in infected P. aeruginosa.
2012-01-01
Background Natrialba magadii is an aerobic chemoorganotrophic member of the Euryarchaeota and is a dual extremophile requiring alkaline conditions and hypersalinity for optimal growth. The genome sequence of Nab. magadii type strain ATCC 43099 was deciphered to obtain a comprehensive insight into the genetic content of this haloarchaeon and to understand the basis of some of the cellular functions necessary for its survival. Results The genome of Nab. magadii consists of four replicons with a total sequence length of 4,443,643 bp and encodes 4,212 putative proteins, some of which contain peptide repeats of various lengths. Comparative genome analyses facilitated the identification of genes encoding putative proteins involved in adaptation to hypersalinity, stress response, glycosylation, and polysaccharide biosynthesis. A proton-driven ATP synthase and a variety of putative cytochromes and other proteins supporting aerobic respiration and electron transfer were encoded by one or more of the Nab. magadii replicons. The genome encodes a number of putative proteases/peptidases as well as protein secretion functions. Genes encoding putative transcriptional regulators, basal transcription factors, signal perception/transduction proteins, and chemotaxis/phototaxis proteins were abundant in the genome. Pathways for the biosynthesis of thiamine, riboflavin, heme, cobalamin, coenzyme F420 and other essential co-factors were deduced by in-depth sequence analyses. However, approximately 36% of Nab. magadii protein-coding genes could not be assigned a function based on BLAST analysis and have been annotated as encoding hypothetical or conserved hypothetical proteins. Furthermore, despite extensive comparative genomic analyses, genes necessary for survival in alkaline conditions could not be identified in Nab. magadii. Conclusions Based on genomic analyses, Nab. magadii is predicted to be metabolically versatile and it could use different carbon and energy sources to sustain growth. Nab. magadii has the genetic potential to adapt to its milieu by intracellular accumulation of inorganic cations and/or neutral organic compounds. The identification of Nab. magadii genes involved in coenzyme biosynthesis is a necessary step toward further reconstruction of the metabolic pathways in halophilic archaea and other extremophiles. The knowledge gained from the genome sequence of this haloalkaliphilic archaeon is highly valuable in advancing the applications of extremophiles and their enzymes. PMID:22559199
Xu, Yanbing; Zheng, Zhaojuan; Xu, Qianqian; Yong, Qiang; Ouyang, Jia
2016-03-30
Inulooligosaccharides (IOS) represent an important class of oligosaccharides at the industrial scale. An efficient conversion of inulin to IOS using an endoinulinase from Aspergillus niger is presented. A 1482 bp codon-optimized gene fragment encoding the endoinulinase from A. niger DSM 2466 was cloned into the pPIC9K vector and transformed into Pichia pastoris KM71. Maximum activity of the recombinant endoinulinase, 858 U/mL, was obtained at 120 h of the high-cell-density fermentation process. The optimal conditions for inulin hydrolysis using the recombinant endoinulinase were investigated. IOS were harvested at a high concentration of 365.1 g/L with a yield of up to 91.3%. IOS with different degrees of polymerization (DP, mainly DP 3-6) were distributed in the final reaction products.
Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.
Hu, Sudeng; Wang, Hanli; Kwong, Sam
2012-04-01
In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Qp) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Qp clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Qp clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
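The flavor of such an adaptive clip can be sketched as follows; the thresholds and increments are invented for illustration and are not the values derived in the paper.

def qp_clip(qp_prev, complexity_ratio, buf_fullness, buf_size):
    """Toy adaptive Qp clip: allow larger Qp jumps when frame complexity
    changes sharply or the buffer nears over/underflow. All thresholds
    below are illustrative assumptions, not the paper's values."""
    base = 2                                  # default smooth-quality clip: |dQp| <= 2
    if complexity_ratio > 1.5 or complexity_ratio < 0.67:
        base += 2                             # scene got much harder/easier
    headroom = min(buf_fullness, buf_size - buf_fullness) / buf_size
    if headroom < 0.1:
        base += 2                             # buffer near overflow/underflow: react faster
    lo, hi = qp_prev - base, qp_prev + base
    def clip(qp):                             # apply to the rate-control Qp proposal
        return max(lo, min(hi, qp))
    return clip

clip = qp_clip(qp_prev=28, complexity_ratio=2.0, buf_fullness=0.95e6, buf_size=1e6)
print(clip(40), clip(20))   # proposals are pulled back into the smooth-quality range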
Rolling Circle Transcription of Ribozymes Targeted to ras and mdr-1
2001-09-01
Circular single-stranded DNA (ssDNA) templates were used to direct transcription of an active hammerhead ribozyme in E. coli cells. Rounds of transcription, ligation-PCR, and recyclization were carried out to optimize transcription, with reactions containing 12.5 units/ml RNase inhibitor (Promega) in a total reaction volume of 15 µl. The sequence encoding the ssDNA and a splint ssDNA were ethanol-precipitated and used as templates to begin the next round of in vitro selection. Keywords: transcription; hammerhead ribozyme; in vitro selection.
An Optimal Dissipative Encoder for the Toric Code
2014-01-16
Optimal erasure protection for scalably compressed video streams with limited retransmission.
Taubman, David; Thie, Johnson
2005-08-01
This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD-optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources that can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection that should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time against those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
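The core unequal-error-protection idea, allocating parity where the marginal expected value is largest, can be sketched for a single transmission slot (ignoring the paper's retransmission hypotheses); the element values, packet counts, and loss rate below are assumptions.

from math import comb

def recover_prob(n, k, p_loss):
    """Probability that at least k of n packets arrive (ideal (n,k) erasure code)."""
    p = 1 - p_loss
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(k, n + 1))

# Scalable frame: earlier elements are worth more (assumed distortion reductions).
values = [10.0, 5.0, 2.0, 1.0]
k = 4                    # source packets per element
budget = 8               # total parity packets to distribute this slot
p_loss = 0.2
parity = [0, 0, 0, 0]

for _ in range(budget):
    # Greedily give the next parity packet where expected value gains most.
    def gain(i):
        cur = recover_prob(k + parity[i], k, p_loss)
        new = recover_prob(k + parity[i] + 1, k, p_loss)
        return values[i] * (new - cur)
    best = max(range(len(values)), key=gain)
    parity[best] += 1

print("parity per element:", parity)   # more protection for higher-priority elements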
NASA Astrophysics Data System (ADS)
Xu, Xibao; Zhang, Jianming; Zhou, Xiaojian
2006-10-01
This paper presents a model integrating GIS, cellular automata (CA) and a genetic algorithm (GA) for urban spatial optimization. The model involves three objectives: maximizing land-use efficiency, maximizing urban spatial harmony, and achieving an appropriate proportion of each land-use type. The CA submodel is designed with a standard Moore neighborhood and three transition rules to maximize land-use efficiency and urban spatial harmony, according to the land-use suitability and a spatial harmony index. The GA submodel is designed with four constraints and seven steps, encoding, initialization, fitness calculation, selection, crossover, mutation, and elitism, to maximize urban spatial harmony and achieve an appropriate proportion of each land-use type. GIS is used to prepare the input data sets for the model and to perform spatial analysis on the results, while CA and GA are integrated to optimize the urban spatial structure, programmed in Matlab 7 and loosely coupled with GIS. Lanzhou, a typical valley-basin city undergoing fast urban development, is chosen as the case study. Finally, a detailed analysis and evaluation of the spatial optimization with the model are made, and it proves to be a powerful tool for optimizing urban spatial structure and a useful supplement for urban planning and policy-making.
Eliciting naturalistic cortical responses with a sensory prosthesis via optimized microstimulation
NASA Astrophysics Data System (ADS)
Choi, John S.; Brockmeier, Austin J.; McNiel, David B.; von Kraus, Lee M.; Príncipe, José C.; Francis, Joseph T.
2016-10-01
Objective. Lost sensations, such as touch, could one day be restored by electrical stimulation along the sensory neural pathways. Such stimulation, when informed by electronic sensors, could provide naturalistic cutaneous and proprioceptive feedback to the user. Perceptually, microstimulation of somatosensory brain regions produces localized, modality-specific sensations, and several spatiotemporal parameters have been studied for their discernibility. However, systematic methods for encoding a wide array of naturally occurring stimuli into biomimetic percepts via multi-channel microstimulation are lacking. More specifically, generating spatiotemporal patterns for explicitly evoking naturalistic neural activation has not yet been explored. Approach. We address this problem by first modeling the dynamical input-output relationship between multichannel microstimulation and downstream neural responses, and then optimizing the input pattern to reproduce naturally occurring touch responses as closely as possible. Main results. Here we show that such optimization produces responses in the S1 cortex of the anesthetized rat that are highly similar to natural, tactile-stimulus-evoked counterparts. Furthermore, information on both pressure and location of the touch stimulus was found to be highly preserved. Significance. Our results suggest that the currently presented stimulus optimization approach holds great promise for restoring naturalistic levels of sensation.
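Under a fitted linear input-output model, the optimization step described above reduces to a regularized least-squares problem, as in the following sketch; the model matrix, target response, and ridge weight are random or assumed placeholders, not the study's fitted dynamical model.

import numpy as np

rng = np.random.default_rng(0)

# Assumed linear input-output model: neural response y = H @ x, where x is the
# multichannel microstimulation pattern (flattened over channels x time) and H
# was fit beforehand from stimulation-response data.
n_resp, n_stim = 40, 25
H = rng.standard_normal((n_resp, n_stim))

# Target: a naturally occurring touch-evoked response we want to reproduce.
y_touch = rng.standard_normal(n_resp)

# Optimal pattern in the least-squares sense, with a small ridge term to keep
# stimulation amplitudes plausible (regularization is an added assumption).
lam = 0.1
x_opt = np.linalg.solve(H.T @ H + lam * np.eye(n_stim), H.T @ y_touch)

print("residual:", np.linalg.norm(H @ x_opt - y_touch) / np.linalg.norm(y_touch))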
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
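The rate-allocation idea underlying such schemes can be sketched with a greedy loop over subbands using the classic exponential distortion-rate model; the variances and budget below are invented, and the real WVQ procedure assigns VQ parameters rather than scalar bits.

import numpy as np

# Assumed subband variances from the DWT of one data field (illustrative).
variances = np.array([9.0, 4.0, 2.5, 1.0, 0.6, 0.3])
weights = np.ones_like(variances)          # could weight subbands by importance
budget = 18                                # total bits to spread over subbands
bits = np.zeros(len(variances), dtype=int)

def distortion(v, b):
    return v * 4.0 ** (-b)                 # classic D(b) = sigma^2 * 2^(-2b) model

for _ in range(budget):
    # Give the next bit to the subband with the largest distortion reduction.
    gains = [w * (distortion(v, b) - distortion(v, b + 1))
             for v, b, w in zip(variances, bits, weights)]
    bits[np.argmax(gains)] += 1

print("bits per subband:", bits)           # high-variance subbands get more bits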
Bilevel Model-Based Discriminative Dictionary Learning for Recognition.
Zhou, Pan; Zhang, Chao; Lin, Zhouchen
2017-03-01
Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the l0 or l1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.
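The lower-level sparse coding problem (without the Laplacian term) can be sketched with a plain ISTA solver, as below; the dictionary and signal are synthetic, and the upper-level classification loss and KKT/ADMM machinery of the paper are omitted.

import numpy as np

def ista_sparse_code(D, x, lam, n_iter=200):
    """Lower-level sparse coding min_z 0.5||x - Dz||^2 + lam*||z||_1 via ISTA.
    (The paper's lower level also carries a Laplacian term; omitted here.)"""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ z - x)              # gradient of the smooth part
        z = z - g / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -0.5, 0.8])  # signal built from three atoms
z = ista_sparse_code(D, x, lam=0.05)
print("nonzeros:", np.count_nonzero(np.abs(z) > 1e-6))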
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced to the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from the ones of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is considered to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.
Parallel efficient rate control methods for JPEG 2000
NASA Astrophysics Data System (ADS)
Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko
2017-09-01
Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them have been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to a speedup of up to 40% with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an additional 40% speedup in the situations where it was actually employed.
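The PCRD-Opt truncation that these rate control methods approximate can be sketched as a Lagrangian threshold on per-code-block R-D slopes; the R-D values below are hypothetical and assumed to already lie on their convex hulls.

# Minimal sketch of PCRD-Opt truncation (hypothetical R-D values): for each
# code block, candidate truncation points as (cumulative rate, distortion).
blocks = [
    [(0, 100.0), (50, 40.0), (90, 20.0), (120, 12.0)],
    [(0, 80.0), (30, 50.0), (70, 18.0), (100, 10.0)],
]

def truncate(block, lam):
    """Keep extending the bit stream while the R-D slope exceeds lam."""
    best = block[0]
    for prev, cur in zip(block, block[1:]):
        slope = (prev[1] - cur[1]) / (cur[0] - prev[0])
        if slope >= lam:
            best = cur
        else:
            break
    return best

def total_rate(lam):
    return sum(truncate(b, lam)[0] for b in blocks)

# Bisect the Lagrange multiplier until the total rate meets the target budget.
budget, lo, hi = 160, 0.0, 10.0
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if total_rate(mid) > budget else (lo, mid)
print("rates:", [truncate(b, hi)[0] for b in blocks], "total <=", budget)

Because all blocks share one slope threshold, the truncation is globally R-D optimal for the budget, which is exactly why early slope estimates let parallel encoders stop coding passes prematurely.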
Pal Roy, Moushree; Datta, Subhabrata; Ghosh, Shilpi
2017-05-01
Bacillus aryabhattai RS1, isolated from the rhizosphere, produced an extracellular, low-temperature-active phytase. The cultural conditions for enzyme production were optimized to obtain 35 U/mL of activity. The purified phytase had a specific activity of 72.97 U/mg and a molecular weight of ∼40 kDa. The enzyme was optimally active at pH 6.5 and 40°C and was highly specific to phytate. It exhibited higher catalytic activity at low temperature, retaining over 40% activity at 10°C. The phytase was more thermostable in the presence of Ca2+ ions and retained 100% residual activity on preincubation at 20-50°C for 30 min. A partial phytase-encoding gene, phyB (816 bp), was cloned and sequenced. The encoded amino acid sequence (272 aa) contained two conserved motifs, DA[A/T/E]DDPA[I/L/V]W and NN[V/I]D[I/L/V]R[Y/D/Q], of β-propeller phytases and had lower sequence homology with other Bacillus phytases, indicating its novelty. The phytase and the bacterial inoculum were effective in improving germination and growth of chickpea seedlings under phosphate-limiting conditions. Moreover, the potential applications of the enzyme, with relatively high activity at lower temperatures (20-30°C), could also be extended to aquaculture and food processing. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:633-641, 2017. © 2017 American Institute of Chemical Engineers.
Efficient production of antibody Fab fragment by transient gene expression in insect cells.
Mori, Keita; Hamada, Hirotsugu; Ogawa, Takafumi; Ohmuro-Matsuyama, Yuki; Katsuda, Tomohisa; Yamaji, Hideki
2017-08-01
Transient gene expression allows a rapid production of diverse recombinant proteins in early-stage preclinical and clinical developments of biologics. Insect cells have proven to be an excellent platform for the production of functional recombinant proteins. In the present study, the production of an antibody Fab fragment by transient gene expression in lepidopteran insect cells was investigated. The DNA fragments encoding heavy-chain (Hc; Fd fragment) and light-chain (Lc) genes of an Fab fragment were individually cloned into the plasmid vector pIHAneo, which contained the Bombyx mori actin promoter downstream of the B. mori nucleopolyhedrovirus (BmNPV) IE-1 transactivator and the BmNPV HR3 enhancer for high-level expression. Trichoplusia ni BTI-TN-5B1-4 (High Five) cells were co-transfected with the resultant plasmid vectors using linear polyethyleneimine. When the transfection efficiency was evaluated, a plasmid vector encoding an enhanced green fluorescent protein (EGFP) gene was also co-transfected. Transfection and culture conditions were optimized based on both the flow cytometry of the EGFP expression in transfected cells and the yield of the secreted Fab fragments determined by enzyme-linked immunosorbent assay (ELISA). Under optimal conditions, a yield of approximately 120 mg/L of Fab fragments was achieved in 5 days in a shake-flask culture. Transient gene expression in insect cells may offer a promising approach to the high-throughput production of recombinant proteins. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
High fidelity quantum gates with vibrational qubits.
Berrios, Eduardo; Gruebele, Martin; Shyshlov, Dmytro; Wang, Lei; Babikov, Dmitri
2012-11-26
Physical implementation of quantum gates acting on qubits does not achieve a perfect fidelity of 1. The actual output qubit may not match the targeted output of the desired gate. According to theoretical estimates, intrinsic gate fidelities >99.99% are necessary so that error correction codes can be used to achieve perfect fidelity. Here we test what fidelity can be accomplished for a CNOT gate executed by a shaped ultrafast laser pulse interacting with vibrational states of the molecule SCCl2. This molecule has been used as a test system for low-fidelity calculations before. To make our test more stringent, we include vibrational levels that do not encode the desired qubits but are close enough in energy to interfere with population transfer by the laser pulse. We use two complementary approaches: optimal control theory determines what the best possible pulse can do; a more constrained physical model calculates what an experiment likely can do. Optimal control theory finds pulses with fidelity >0.9999, in excess of the quantum error correction threshold with 8 × 10^4 iterations. On the other hand, the physical model achieves only 0.9992 after 8 × 10^4 iterations. Both calculations converge as an inverse power law toward unit fidelity after >10^2 iterations/generations. In principle, the fidelities necessary for quantum error correction are reachable with qubits encoded by molecular vibrations. In practice, it will be challenging with current laboratory instrumentation because of slow convergence past fidelities of 0.99.
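A common figure of merit behind such fidelity numbers can be sketched numerically: compare an achieved unitary with the target CNOT via the overlap |Tr(U_target† U)|²/d². The perturbation below is a random Hermitian generator standing in for residual pulse error, not the molecular model of the study.

import numpy as np
from scipy.linalg import expm

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def gate_fidelity(U, U_target):
    """|Tr(U_target^dag U)|^2 / d^2: equals 1 iff U matches the target up to global phase."""
    d = U_target.shape[0]
    return abs(np.trace(U_target.conj().T @ U)) ** 2 / d ** 2

# Perturb the ideal gate with a small random Hermitian generator, mimicking
# residual error from an imperfect control pulse (illustrative assumption).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
K = (A + A.conj().T) / 2
for eps in (1e-1, 1e-2, 1e-3):
    U = expm(-1j * eps * K) @ CNOT
    print(f"eps={eps:g}  fidelity={gate_fidelity(U, CNOT):.6f}")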