Science.gov

Sample records for optimal dense coding

  1. MHD Code Optimizations and Jets in Dense Gaseous Halos

    NASA Astrophysics Data System (ADS)

    Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max

    We have further optimized and extended the 3D MHD code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel on top of the shared-memory parallelization. On a 512^3 grid, we reach 561 Gflops with 32 nodes on the SX-8. We have also successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a strange reluctance of cold material to enter the low-density cocoon, which has to be investigated further.

  2. Optimal Dense Coding and Swap Operation Between Two Coupled Electronic Spins: Effects of Nuclear Field and Spin-Orbit Interaction

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Zhang, Guo-Feng

    2016-08-01

    The effects of the nuclear field and spin-orbit interaction on dense coding and the swap operation are studied in detail for both the antiferromagnetic (AFM) and ferromagnetic (FM) coupling cases. The conditions for valid dense coding, and the conditions under which the swap operation is feasible, are given.

  3. Optimized QKD BB84 protocol using quantum dense coding and CNOT gates: feasibility based on probabilistic optical devices

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2014-05-01

    In this work, we simulate a fiber-based Quantum Key Distribution Protocol (QKDP) BB84 working at the telecom wavelength of 1550 nm, taking into consideration an optimized attack strategy. We consider a quantum channel composed of a probabilistic Single Photon Source (SPS), single-mode optical fiber, and a high-efficiency quantum detector. We show the advantages of using Quantum Dots (QD) embedded in a micro-cavity compared to Heralded Single Photon Sources (HSPS). Second, we show that Eve always gains some information, depending on the mean photon number per pulse of the SPS used, and we therefore propose an optimized version of the QKDP BB84 based on Quantum Dense Coding (QDC) that could be implemented with quantum CNOT gates. We evaluate the success probability of implementing the optimized QKDP BB84 when using today's probabilistic quantum optical devices for circuit realization. For our modeling we use an abstract probabilistic model of a CNOT gate based on linear optical components and having a success probability of sqrt(4/27), and we take into consideration the best SPS realizations, namely the QD and the HSPS, generating a single photon per pulse with success probabilities of 0.73 and 0.37, respectively. We show that the protocol is totally secure against attacks but can be correctly implemented only with a success probability of a few percent.
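
    The abstract quotes the component success probabilities but not how they combine. The arithmetic below is purely illustrative: it assumes (the abstract does not say) that one protocol run consumes two single photons and two probabilistic CNOT gates, and simply multiplies the probabilities quoted above.

    ```python
    # Illustrative only: assumed 2 single-photon emissions and 2 probabilistic
    # CNOT gates per run; the abstract does not state the actual gate count.
    P_CNOT = (4.0 / 27.0) ** 0.5   # linear-optics CNOT model quoted above
    P_QD = 0.73                    # quantum-dot single-photon source
    P_HSPS = 0.37                  # heralded single-photon source

    def run_success(p_source, n_photons=2, n_cnots=2):
        """Probability that every probabilistic component succeeds in one run."""
        return p_source ** n_photons * P_CNOT ** n_cnots

    print(run_success(P_QD))    # ~0.079, i.e. a few percent
    print(run_success(P_HSPS))  # ~0.020
    ```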

  4. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-04-11

    The test data obtained from the Baseline Assessment, which compares the performance of the density tracers to that of different sizes of coal particles, is now complete. The experimental results show that the tracer data can indeed be used to accurately predict HMC performance. The following conclusions were drawn: (i) the tracer curve is slightly sharper than the curve for the coarsest size fraction of coal (probably due to the greater resolution of the tracer technique), (ii) the Ep increases with decreasing coal particle size, and (iii) the Ep values are not excessively large for well-maintained HMC circuits. The major problems discovered were associated with improper apex-to-vortex finder ratios and particle hang-up due to media segregation. Only one plant yielded test data that were typical of a fully optimized level of performance.

  5. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-01-14

    During the past quarter, float-sink analyses were completed for four of the seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid-February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operators' manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.

  6. Relating quantum discord with the quantum dense coding capacity

    SciTech Connect

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  7. Relating quantum discord with the quantum dense coding capacity

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-01

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  8. Computer codes for dispersion of dense gas

    SciTech Connect

    Weber, A.H.; Watts, J.R.

    1982-02-01

    Two models for describing the behavior of dense gases have been adapted for specific applications at the Savannah River Plant (SRP) and have been programmed on the IBM computer. One of the models has been used to predict the effect of a ruptured H2S storage tank at the 400 Area. The other model has been used to simulate the effect of an unignited release of H2S from the 400-Area flare tower.

  9. Code Optimization Techniques

    SciTech Connect

    MAGEE,GLEN I.

    2000-08-03

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
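
    For context on the encoding cost that the optimization work above targets, here is a minimal, unoptimized sketch of systematic Reed-Solomon encoding over GF(256), split into blocks with a possibly shortened final block. The field polynomial, block size, and parity count are illustrative assumptions, not the AURA project's actual parameters.

    ```python
    # Build GF(256) log/antilog tables (primitive polynomial 0x11D, an assumption).
    GF_EXP, GF_LOG = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        GF_EXP[i] = x
        GF_LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11D
    for i in range(255, 512):
        GF_EXP[i] = GF_EXP[i - 255]

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else GF_EXP[GF_LOG[a] + GF_LOG[b]]

    def poly_mul(p, q):
        r = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                r[i + j] ^= gf_mul(pi, qj)
        return r

    def generator_poly(nsym):
        g = [1]
        for i in range(nsym):
            g = poly_mul(g, [1, GF_EXP[i]])      # (x + alpha^i), high-to-low coefficients
        return g

    def rs_encode_block(msg, nsym):
        """Systematic encoding: message symbols followed by nsym parity symbols."""
        gen = generator_poly(nsym)
        buf = list(msg) + [0] * nsym
        for i in range(len(msg)):                # polynomial long division
            coef = buf[i]
            if coef:
                for j in range(1, len(gen)):
                    buf[i + j] ^= gf_mul(gen[j], coef)
        return list(msg) + buf[len(msg):]

    def rs_encode_stream(data, k=223, nsym=32):
        """Multi-block encoding; the final block may be shorter (a shortened code)."""
        out = []
        for i in range(0, len(data), k):
            out.extend(rs_encode_block(data[i:i + k], nsym))
        return out
    ```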

  10. Parallel sparse and dense information coding streams in the electrosensory midbrain

    PubMed Central

    Sproule, Michael K.J.; Metzen, Michael G.; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory information is critical for an organism’s survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  11. Parallel sparse and dense information coding streams in the electrosensory midbrain.

    PubMed

    Sproule, Michael K J; Metzen, Michael G; Chacron, Maurice J

    2015-10-21

    Efficient processing of incoming sensory information is critical for an organism's survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  12. Controlled Dense Coding Using the Maximal Slice States

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Mo, Zhi-wen; Sun, Shu-qin

    2016-04-01

    In this paper we investigate controlled dense coding with the maximal slice states. Three schemes are presented. Our schemes employ the maximal slice states as the quantum channel, a tripartite entangled state shared by the first party (Alice), the second party (Bob), and the third party (Cliff). The supervisor (Cliff) supervises and controls the channel between Alice and Bob via measurement. By carrying out local von Neumann measurement, a controlled-NOT operation, and a positive operator-valued measure (POVM), and by introducing an auxiliary particle, we obtain the success probability of dense coding. It is shown that the success probability of information transmitted from Alice to Bob is usually less than one. The average amount of information for each scheme is calculated in detail. These results offer deeper insight into quantum dense coding via quantum channels of partially entangled states.

  13. Deterministic dense coding and faithful teleportation with multipartite graph states

    SciTech Connect

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-05-15

    We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a graph state to be viable for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.

  14. Comment I on "Dense coding in entangled states"

    SciTech Connect

    Wojcik, Antoni; Grudka, Andrzej

    2003-07-01

    In this Comment we question the recent analysis of two dense coding protocols presented by Lee, Ahn, and Hwang [Phys. Rev. A 66, 024304 (2002)]. We argue that in the case of two-party communication protocol, there is no reason for using a maximally entangled state of more than two qubits.

  15. Secure N-dimensional simultaneous dense coding and applications

    NASA Astrophysics Data System (ADS)

    Situ, H.; Qiu, D.; Mateus, P.; Paunković, N.

    2015-12-01

    Simultaneous dense coding (SDC) guarantees that Bob and Charlie simultaneously receive their respective information from Alice in their respective processes of dense coding. The idea is to use the so-called locking operation to “lock” the entanglement channels, thus requiring a joint unlocking operation by Bob and Charlie in order to simultaneously obtain the information sent by Alice. We present some new results on SDC: (1) We propose three SDC protocols, which use different N-dimensional entanglement (Bell state, W state and GHZ state). (2) Besides the quantum Fourier transform, two new locking operators are introduced (the double controlled-NOT operator and the SWAP operator). (3) In the case that spatially distant Bob and Charlie have to finalize the protocol by implementing the unlocking operation through communication, we improve our protocol’s fairness, with respect to Bob and Charlie, by implementing the unlocking operation in a series of steps. (4) We improve the security of SDC against the intercept-resend attack. (5) We show that SDC can be used to implement a fair contract signing protocol. (6) We also show that the N-dimensional quantum Fourier transform can act as the locking operator in simultaneous teleportation of N-level quantum systems.
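
    Since the abstract names the N-dimensional quantum Fourier transform as one of the locking operators, the minimal sketch below (our illustration, not the authors' full protocol) builds that unitary and checks that it is reversible, which is what allows a joint unlocking step.

    ```python
    import numpy as np

    def qft_matrix(N):
        """N-dimensional quantum Fourier transform, one of the locking unitaries named above."""
        j, k = np.meshgrid(np.arange(N), np.arange(N))
        return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

    F = qft_matrix(4)
    print(np.allclose(F.conj().T @ F, np.eye(4)))  # True: unitary, so the lock can be undone
    ```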

  16. Continuous-variable dense coding via a general Gaussian state: Monogamy relation

    NASA Astrophysics Data System (ADS)

    Lee, Jaehak; Ji, Se-Wan; Park, Jiyong; Nha, Hyunchul

    2014-08-01

    We study a continuous-variable dense coding protocol, originally proposed to employ a two-mode squeezed state, using a general two-mode Gaussian state as a quantum channel. We particularly obtain conditions to manifest quantum advantage by beating two well-known single-mode schemes, namely, the squeezed-state scheme (best Gaussian scheme) and the number-state scheme (optimal scheme achieving the Holevo bound). We then extend our study to a multipartite Gaussian state and investigate the monogamy of operational entanglement measured by the communication capacity under the dense coding protocol. We show that this operational entanglement obeys a strict monogamy relation, by means of Heisenberg's uncertainty principle among different parties; i.e., the quantum advantage for communication can be possible for only one pair of two-mode systems among many parties.

  17. SWOC: Spectral Wavelength Optimization Code

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.

    2016-06-01

    SWOC (Spectral Wavelength Optimization Code) determines the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a spectroscopic study. It computes a figure-of-merit for different spectral configurations using a user-defined list of spectral features, and, utilizing a set of flux-calibrated spectra, determines the spectral regions showing the largest differences among the spectra.

  18. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence with three primary independent modules: the initializer, the physics module, and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. Distributed-memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module, thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the

  19. Study of controlled dense coding with some discrete tripartite and quadripartite states

    NASA Astrophysics Data System (ADS)

    Roy, Sovik; Ghosh, Biplab

    2015-07-01

    The paper presents a detailed study of the controlled dense coding scheme for different types of three- and four-particle states, including the GHZ state, GHZ-type states, the maximal slice (MS) state, the four-particle GHZ state, and the W class of states. It is shown that GHZ-type states can be used for controlled dense coding in a probabilistic sense. We show relations among the parameter of the GHZ-type state, the concurrence of the bipartite state shared by two parties with respect to the GHZ-type state, and Charlie's measurement angle θ. The GHZ states, as a special case of MS states depending on parameters, have also been considered here. We find that the tripartite W state and the quadripartite W state cannot be used for controlled dense coding, whereas |Wn>ABC states can be used probabilistically. Finally, we investigate the controlled dense coding scheme for tripartite qutrit states.

  20. Optimal Codes for the Burst Erasure Channel

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2008-08-01

    We make the simple observation that the erasure burst correction capability of any (n, k) code can be extended to arbitrary lengths above n with the use of a block interleaver, and discuss nuances of this property when channel symbols are over GF(p) and the code is defined over GF(p^J), J > 1. The results imply that maximum distance separable codes (e.g., Reed-Solomon) offer optimal burst erasure protection with linear complexity, and that the optimality does not depend on the length of the code.
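
    A minimal sketch of the interleaving observation (our own illustration; the symbols and parameters are assumed): writing I codewords row-wise and transmitting column-wise spreads a burst of up to I*b consecutive erasures into at most b erasures per codeword, so a code correcting b erasures per block protects bursts I times as long.

    ```python
    # Illustrative block interleaver: write codewords as rows, transmit by columns.
    def interleave(codewords):
        n = len(codewords[0])
        return [codewords[i][j] for j in range(n) for i in range(len(codewords))]

    def deinterleave(stream, depth):
        n = len(stream) // depth
        return [[stream[j * depth + i] for j in range(n)] for i in range(depth)]

    # A burst of `depth` consecutive erasures hits each codeword at most once.
    codewords = [[f"c{i}s{j}" for j in range(7)] for i in range(4)]  # 4 codewords, n = 7
    tx = interleave(codewords)
    tx[8:12] = [None] * 4                        # burst erasure of length 4 = depth
    rx = deinterleave(tx, depth=4)
    print([row.count(None) for row in rx])       # [1, 1, 1, 1]: one erasure per codeword
    ```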

  1. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast and parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.

  2. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.

  3. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during program development, mainly as a result of improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  4. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure

  5. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    SciTech Connect

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joeseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amounts compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  6. Overcoming a limitation of deterministic dense coding with a nonmaximally entangled initial state

    SciTech Connect

    Bourdon, P. S.; Gerjuoy, E.

    2010-02-15

    Under two-party deterministic dense coding, Alice communicates (perfectly distinguishable) messages to Bob via a qudit from a pair of entangled qudits in pure state |Ψ⟩. If |Ψ⟩ represents a maximally entangled state (i.e., each of its Schmidt coefficients is √(1/d)), then Alice can convey to Bob one of d² distinct messages. If |Ψ⟩ is not maximally entangled, then Ji et al. [Phys. Rev. A 73, 034307 (2006)] have shown that under the original deterministic dense-coding protocol, in which messages are encoded by unitary operations performed on Alice's qudit, it is impossible to encode d²-1 messages. Encoding d²-2 messages is possible; see, for example, the numerical studies by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. Answering a question raised by Wu et al. [Phys. Rev. A 73, 042311 (2006)], we show that when |Ψ⟩ is not maximally entangled, the communications limit of d²-2 messages persists even when the requirement that Alice encode by unitary operations on her qudit is weakened to allow encoding by more general quantum operators. We then describe a dense-coding protocol that can overcome this limitation with high probability, assuming the largest Schmidt coefficient of |Ψ⟩ is sufficiently close to √(1/d). In this protocol, d²-2 of the messages are encoded via unitary operations on Alice's qudit, and the final (d²-1)-th message is encoded via a non-trace-preserving quantum operation.
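
    For orientation, the textbook d = 2 case of the protocol discussed above: with a maximally entangled pair, Alice encodes d² = 4 perfectly distinguishable messages by local unitaries (here the Pauli operators). The sketch below only checks orthogonality of the encoded states and is not the authors' qudit construction.

    ```python
    import numpy as np

    # Textbook qubit dense coding (d = 2): Alice applies I, X, Z or XZ to her half
    # of a maximally entangled pair; the four encoded states are orthogonal, so Bob
    # can read out d^2 = 4 messages with a joint (Bell-basis) measurement.
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

    encoded = [np.kron(U, I2) @ bell for U in (I2, X, Z, X @ Z)]
    gram = np.array([[abs(a.conj() @ b) for b in encoded] for a in encoded])
    print(np.allclose(gram, np.eye(4)))   # True: 4 perfectly distinguishable messages
    ```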

  7. Optimizing Extender Code for NCSX Analyses

    SciTech Connect

    M. Richman, S. Ethier, and N. Pomphrey

    2008-01-22

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than a Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch.

  8. Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding

    NASA Astrophysics Data System (ADS)

    Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying

    2016-02-01

    We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.

  9. Effects of quantum noises and noisy quantum operations on entanglement and special dense coding

    SciTech Connect

    Quek, Sylvanus; Li Ziang; Yeo Ye

    2010-02-15

    We show how noncommuting noises could cause a Bell state χ₀ to suffer entanglement sudden death (ESD). ESD may similarly occur when a noisy operation acts, if the corresponding Hamiltonian and Lindblad operator do not commute. We study the implications of these in special dense coding S. When noises that cause ESD act, we show that χ₀ may lose its capacity for S before ESD occurs. Similarly, χ₀ may fail to yield information transfer better than classically possible when the encoding operations are noisy, though entanglement is not destroyed in the process.

  10. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  11. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  12. Statistical physics, optimization and source coding

    NASA Astrophysics Data System (ADS)

    Zechhina, Riccardo

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question ``does there exist an assignment to the variables that satisfies all constraints?'' may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms -- the survey propagation (SP) algorithms -- that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.

  13. Optimality principles for the visual code

    NASA Astrophysics Data System (ADS)

    Pitkow, Xaq

    One way to try to make sense of the complexities of our visual system is to hypothesize that evolution has developed nearly optimal solutions to the problems organisms face in the environment. In this thesis, we study two such principles of optimality for the visual code. In the first half of this dissertation, we consider the principle of decorrelation. Influential theories assert that the center-surround receptive fields of retinal neurons remove spatial correlations present in the visual world. It has been proposed that this decorrelation serves to maximize information transmission to the brain by avoiding transfer of redundant information through optic nerve fibers of limited capacity. While these theories successfully account for several aspects of visual perception, the notion that the outputs of the retina are less correlated than its inputs has never been directly tested at the site of the putative information bottleneck, the optic nerve. We presented visual stimuli with naturalistic image correlations to the salamander retina while recording responses of many retinal ganglion cells using a microelectrode array. The output signals of ganglion cells are indeed decorrelated compared to the visual input, but the receptive fields are only partly responsible. Much of the decorrelation is due to the nonlinear processing by neurons rather than the linear receptive fields. This form of decorrelation dramatically limits information transmission. Instead of improving coding efficiency we show that the nonlinearity is well suited to enable a combinatorial code or to signal robust stimulus features. In the second half of this dissertation, we develop an ideal observer model for the task of discriminating between two small stimuli which move along an unknown retinal trajectory induced by fixational eye movements. The ideal observer is provided with the responses of a model retina and guesses the stimulus identity based on the maximum likelihood rule, which involves sums

  14. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
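
    A loose runtime analogy of the idea (not the patented compiler mechanism itself): keep both compiled versions and fall back to the conservative one when the aggressively optimized one raises a failure introduced by the unsafe optimization.

    ```python
    def run_with_rollback(aggressive, conservative, *args):
        """Try the aggressively optimized version; roll back to the safe one on failure."""
        try:
            return aggressive(*args)
        except Exception:
            # A new exception source introduced by the unsafe optimization:
            # discard the attempt and re-execute the conservatively compiled version.
            return conservative(*args)
    ```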

  15. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
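
    The optimal prefix codes identified for geometric distributions are the Golomb codes (unary quotient plus truncated-binary remainder). Below is a minimal encoder sketch; the parameter m is supplied by the caller here rather than derived from the distribution as in the paper.

    ```python
    def golomb_encode(n, m):
        """Golomb codeword for the nonnegative integer n with parameter m."""
        q, r = divmod(n, m)
        bits = "1" * q + "0"                   # quotient in unary
        b = m.bit_length()
        if m & (m - 1) == 0:                   # m a power of two: plain Rice remainder
            bits += format(r, "b").zfill(b - 1) if m > 1 else ""
        else:                                  # truncated binary remainder
            cutoff = (1 << b) - m
            bits += format(r, "b").zfill(b - 1) if r < cutoff else format(r + cutoff, "b").zfill(b)
        return bits

    print([golomb_encode(n, 3) for n in range(6)])  # ['00', '010', '011', '100', '1010', '1011']
    ```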

  16. Optimal protein-folding codes from spin-glass theory.

    PubMed Central

    Goldstein, R A; Luthey-Schulten, Z A; Wolynes, P G

    1992-01-01

    Protein-folding codes embodied in sequence-dependent energy functions can be optimized using spin-glass theory. Optimal folding codes for associative-memory Hamiltonians based on aligned sequences are deduced. A screening method based on these codes correctly recognizes protein structures in the "twilight zone" of sequence identity in the overwhelming majority of cases. Simulated annealing for the optimally encoded Hamiltonian generally leads to qualitatively correct structures. PMID:1594594

  17. Robustly optimal rate one-half binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1975-01-01

    Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.

  18. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.

  19. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.

  20. Analysis of the optimality of the standard genetic code.

    PubMed

    Kumar, Balaji; Saini, Supreet

    2016-07-19

    Many theories have been proposed attempting to explain the origin of the genetic code. While strong reasons remain to believe that the genetic code evolved as a frozen accident, at least for the first few amino acids, other theories remain viable. In this work, we test the optimality of the standard genetic code against approximately 17 million genetic codes, and locate 29 which outperform the standard genetic code at the following three criteria: (a) robustness to point mutation; (b) robustness to frameshift mutation; and (c) ability to encode additional information in the coding region. We use a genetic algorithm to generate and score codes from different parts of the associated landscape, which are, as a result, presumably more representative of the entire landscape. Our results show that while the genetic code is sub-optimal for robustness to frameshift mutation and the ability to encode additional information in the coding region, it is very strongly selected for robustness to point mutation. This coupled with the observation that the different performance indicator scores for a particular genetic code are negatively correlated makes the standard genetic code nearly optimal for the three criteria tested in this work. PMID:27327359
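
    As a sketch of criterion (a), the snippet below scores a code's robustness to point mutation as the average cost over all single-nucleotide substitutions. The code table and the amino-acid cost function are user-supplied placeholders, not the scoring details used in the paper.

    ```python
    BASES = "ACGU"

    def point_mutation_robustness(code, cost):
        """Average substitution cost over every single-base change of every codon.

        `code` maps each codon (e.g. 'AUG') to an amino acid or '*' for stop;
        `cost` scores replacing one amino acid by another (0 = harmless).
        Lower averages mean a code that is more robust to point mutation.
        """
        total, count = 0.0, 0
        for codon, aa in code.items():
            for pos in range(3):
                for base in BASES:
                    if base == codon[pos]:
                        continue
                    mutant = codon[:pos] + base + codon[pos + 1:]
                    total += cost(aa, code[mutant])
                    count += 1
        return total / count
    ```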

  1. Optimizing ATLAS code with different profilers

    NASA Astrophysics Data System (ADS)

    Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.

    2014-06-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code comprises approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort, different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used in improving the performance of the new magnetic field code and in identifying potential vectorization targets in several places, such as the Runge-Kutta propagation code.

  2. One-shot absolute pattern for dense reconstruction using DeBruijn coding and Windowed Fourier Transform

    NASA Astrophysics Data System (ADS)

    Fernandez, Sergio; Salvi, Joaquim

    2013-03-01

    Shape reconstruction using coded structured light (SL) is considered one of the most reliable techniques to recover object surfaces. Among SL techniques, the achievement of dense acquisition for moving scenarios constitutes an active field of research. A common solution is to project a single one-shot fringe pattern, extracting depth from the phase deviation of the imaged pattern. However, the algorithms employed to unwrap the phase are computationally slow and can fail in the presence of depth discontinuities and occlusions. In this work, a proposal for a new one-shot dense pattern that combines DeBruijn and Windowed Fourier Transform to obtain a dense, absolute, accurate and computationally fast 3D reconstruction is presented and compared with other existing techniques.
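
    De Bruijn coding in such one-shot patterns rests on De Bruijn sequences, in which every window of n symbols occurs exactly once, so a local stripe neighbourhood identifies its absolute position. The standard generator below is a generic sketch, not the authors' pattern design.

    ```python
    def de_bruijn(k, n):
        """De Bruijn sequence B(k, n): every length-n word over a k-symbol alphabet
        appears exactly once as a cyclic window, which is what lets a local
        neighbourhood of stripes encode an absolute position."""
        a = [0] * (k * n)
        seq = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return seq

    print(de_bruijn(2, 3))  # [0, 0, 0, 1, 0, 1, 1, 1]: all 8 binary triplets appear cyclically
    ```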

  3. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-06-01

    Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects with methods reported to date. Approach. We optimize stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives
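
    A toy version of the kind of convex program described above, written with cvxpy; the transfer matrix, quadratic form, and constraint bounds are random or hypothetical placeholders, not a real head model or the authors' exact formulation.

    ```python
    import numpy as np
    import cvxpy as cp

    # Hypothetical stand-ins for a head model: A maps electrode currents to the mean
    # ROI current density (x, y, z); Q stands in for the brain-power quadratic form.
    n_elec = 128
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, n_elec))
    Q = np.eye(n_elec)
    d = np.array([0.0, 0.0, 1.0])               # desired direction of modulation in the ROI

    s = cp.Variable(n_elec)                      # per-electrode stimulus currents (mA)
    problem = cp.Problem(
        cp.Maximize((d @ A) @ s),                # current density along d in the ROI
        [
            cp.sum(s) == 0,                      # injected current must return through the array
            cp.norm(s, "inf") <= 1.0,            # individual electrode current limit
            cp.norm(s, 1) <= 4.0,                # twice the total injected current
            cp.quad_form(s, Q) <= 10.0,          # stand-in for the current power constraint
        ],
    )
    problem.solve()
    print(problem.status, problem.value)
    ```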

  4. Optimization of KINETICS Chemical Computation Code

    NASA Technical Reports Server (NTRS)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code there is to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.

  5. Effects of intrinsic decoherence on various correlations and quantum dense coding in a two superconducting charge qubit system

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Maimaitiyiming-Tusun; Parouke-Paerhati; Ahmad-Abliz

    2015-09-01

    The influence of intrinsic decoherence on various correlations and dense coding in a model consisting of two identical superconducting charge qubits coupled by a fixed capacitor is investigated. The results show that, despite the intrinsic decoherence, the correlations as well as the dense coding channel capacity can be effectively increased via a suitable combination of system parameters, i.e., making the mutual coupling energy between the two charge qubits larger than the Josephson energy of the qubits. The bigger the difference between them, the stronger the effect. Project supported by the Project to Develop Outstanding Young Scientific Talents of China (Grant No. 2013711019), the Natural Science Foundation of Xinjiang Province, China (Grant No. 2012211A052), the Foundation for Key Program of Ministry of Education of China (Grant No. 212193), and the Innovative Foundation for Graduate Students Granted by the Key Subjects of Theoretical Physics of Xinjiang Province, China (Grant No. LLWLL201301).

  6. Optimizing Nuclear Physics Codes on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Nam, Hai Ah

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  7. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.

  8. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
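
    Both figures of merit in the search can be computed directly from the periodic autocorrelation; a minimal sketch (using a length-7 m-sequence as the example, not one of the reported length 28 to 64 codes):

    ```python
    import numpy as np

    def periodic_autocorrelation(code):
        """Periodic (cyclic) autocorrelation of a +/-1 phase code."""
        c = np.asarray(code, dtype=float)
        return np.array([np.dot(c, np.roll(c, k)) for k in range(len(c))])

    def sidelobe_metrics(code):
        sidelobes = periodic_autocorrelation(code)[1:]    # drop the zero-shift peak
        return np.max(np.abs(sidelobes)), np.sum(sidelobes ** 2)

    m_seq = [1, 1, 1, -1, 1, -1, -1]     # length-7 m-sequence: all sidelobes equal -1
    print(sidelobe_metrics(m_seq))       # (1.0, 6.0)
    ```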

  9. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  10. A Systematic Method of Interconnection Optimization for Dense-Array Concentrator Photovoltaic System

    PubMed Central

    Siaw, Fei-Lu

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points, namely short-circuit, open-circuit, and the maximum power point, are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%. PMID:24453823

  11. A systematic method of interconnection optimization for dense-array concentrator photovoltaic system.

    PubMed

    Siaw, Fei-Lu; Chong, Kok-Keong

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points, namely short-circuit, open-circuit, and the maximum power point, are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%. PMID:24453823

  12. Optimal Grouping and Matching for Network-Coded Cooperative Communications

    SciTech Connect

    Sharma, S; Shi, Y; Hou, Y T; Kompella, S; Midkiff, S F

    2011-11-01

    Network-coded cooperative communications (NC-CC) is a new advance in wireless networking that exploits network coding (NC) to improve the performance of cooperative communications (CC). However, there remains very limited understanding of this new hybrid technology, particularly at the link layer and above. This paper fills in this gap by studying a network optimization problem that requires joint optimization of session grouping, relay node grouping, and matching of session/relay groups. After showing that this problem is NP-hard, we present a polynomial time heuristic algorithm to this problem. Using simulation results, we show that our algorithm is highly competitive and can produce near-optimal results.

  13. Simulation of two- and three-dimensional dense solute plume behavior with the METROPOL-3 code

    SciTech Connect

    Oostrom, M.; Roberson, K.R.; Leijnse, A.

    1994-07-01

    Contaminant plumes emanating from waste disposal facilities are often denser than the ambient groundwater. These so-called dense plumes sink deeper into phreatic aquifers and may, under certain conditions, become unstable. The behavior of variable-density, aqueous-phase contaminant plumes in saturated, homogeneous 2-D and 3-D intermediate-scale aquifer models was investigated with the finite element code METROPOL-3. The numerical results agree quantitatively with previously reported laboratory-scale transport experiments. The simulations show that dense plumes are more likely to penetrate deeper into aquifers and eventually become unstable with increasing density differences between the leachate solution and the ambient groundwater, and with increases in other important parameters such as the saturated hydraulic conductivity of the porous medium, the leakage rate of the contaminant solution, and the source width. The significance of unstable behavior decreases with increasing dispersivity values. It was also observed that 3-D flow patterns have a stabilizing effect on dense contaminant plume behavior.

  14. Performance optimization of dense-array concentrator photovoltaic system considering effects of circumsolar radiation and slope error.

    PubMed

    Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui

    2015-07-27

    This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system composed of a non-imaging dish concentrator by considering the effects of circumsolar radiation and slope error. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar cell interconnection circuit in a dense-array concentrator photovoltaic module has been proposed by minimizing the current mismatch caused by the non-uniformity of the concentrated sunlight. An optimized layout of the solar cell interconnection circuit with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error. PMID:26367685

  15. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and its verification using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
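
    As an illustration of the evolutionary idea (not the authors' algorithm), the following Python sketch runs a toy (1+1)-style evolution strategy on marker orders: the mutation operator reverses a random segment, and a mutant replaces the parent only if it shortens the total map length computed from a noisy pairwise-distance matrix. All names and parameter values are invented for the example.

```python
import numpy as np

def map_length(order, dist):
    """Total distance between adjacent markers for a given order."""
    return sum(dist[order[i], order[i + 1]] for i in range(len(order) - 1))

def evolve_order(dist, n_iter=20000, seed=0):
    """Toy (1+1)-style evolution strategy on marker orders: the mutation
    reverses a random segment, and a mutant replaces the parent only if
    it shortens the map."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(dist.shape[0])
    best = map_length(order, dist)
    for _ in range(n_iter):
        i, j = sorted(rng.integers(0, len(order), size=2))
        if i == j:
            continue
        trial = order.copy()
        trial[i:j + 1] = trial[i:j + 1][::-1]
        cost = map_length(trial, dist)
        if cost < best:
            order, best = trial, cost
    return order, best

# simulate 40 markers on a line and a noisy, symmetric distance matrix
rng = np.random.default_rng(1)
pos = np.sort(rng.uniform(0.0, 100.0, 40))
noisy = np.abs(pos[:, None] - pos[None, :]) + rng.normal(0.0, 0.5, (40, 40))
dist = np.maximum((noisy + noisy.T) / 2.0, 0.0)
order, length = evolve_order(dist)
print("consensus-like map length:", round(length, 1))
```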

  16. Code optimization for tagged-token data flow machines

    SciTech Connect

    Bohm, A.P.W.; Sargeant, J.

    1989-01-01

    The efficiency of dataflow code generated from a high-level language can be improved dramatically by both conventional and dataflow-specific optimizations. Such techniques are used in implementing the single-assignment language SISAL on the Manchester Dataflow Machine. The quality of code generated for numeric applications can be measured in terms of the ratio of total number of instructions executed to floating point operations: the MIPS/MFLOPS ratio. Relevant features of the general purpose single-assignment language SISAL and the Manchester Dataflow Machine are introduced. After an assessment of the initial SISAL implementation, showing it to be very expensive, a range of optimizations are described.

  17. Code optimization for tagged-token dataflow machines

    SciTech Connect

    Bohm, A.P.W.; Sargeant, J.

    1989-01-01

    The efficiency of dataflow code generated from a high-level language can be improved dramatically by both conventional and dataflow-specific optimizations. Such techniques are used in implementing the single-assignment language SISAL on the Manchester Dataflow Machine. The quality of code generated for numeric applications can be measured in terms of the ratio of total number of instructions executed to floating point operations: the MIPS/MFLOPS ratio. Relevant features of the general purpose single-assignment language SISAL and the Manchester Dataflow Machine are introduced. After an assessment of the initial SISAL implementation, showing it to be very expensive, a range of optimizations are described.

  18. Casting polymer nets to optimize noisy molecular codes

    PubMed Central

    Tlusty, Tsvi

    2008-01-01

    Life relies on the efficient performance of molecular codes, which relate symbols and meanings via error-prone molecular recognition. We describe how optimizing a code to withstand the impact of molecular recognition noise may be understood from the statistics of a two-dimensional network made of polymers. The noisy code is defined by partitioning the space of symbols into regions according to their meanings. The “polymers” are the boundaries between these regions, and their statistics define the cost and the quality of the noisy code. When the parameters that control the cost–quality balance are varied, the polymer network undergoes a transition, where the number of encoded meanings rises discontinuously. Effects of population dynamics on the evolution of molecular codes are discussed. PMID:18550822

  19. Optimized design and research of secondary microprism for dense array concentrating photovoltaic module

    NASA Astrophysics Data System (ADS)

    Yang, Guanghui; Chen, Bingzhen; Liu, Youqiang; Guo, Limin; Yao, Shun; Wang, Zhiyong

    2015-10-01

    As the critical component of a concentrating photovoltaic module, secondary concentrators can be effective in increasing the acceptance angle and incident light, as well as improving the energy uniformity of focal spots. This paper presents a design of a transmission-type secondary microprism for a dense-array concentrating photovoltaic module. The 3-D model of this design is established in Solidworks, and important parameters such as inclination angle and component height are optimized using Zemax. According to the design and simulation results, several secondary microprisms with different parameters are fabricated and tested in combination with a Fresnel lens and a multi-junction solar cell. The sun-simulator I-V test results show that the combination has the highest output power when the secondary microprism height is 5 mm and the top facet side length is 7 mm. Compared with the case without a secondary microprism, the output power improves by 11% when secondary microprisms are employed, indicating the indispensability of secondary microprisms in concentrating photovoltaic modules.

  20. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel. The method is based on the design of the quantizers used for encoding the transform coefficients; the algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.
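
    The bit-allocation step can be illustrated with the standard high-rate quantization model, in which each additional bit reduces a coefficient's distortion by roughly a factor of four. The Python sketch below performs a greedy marginal-return allocation under that model (a simplified stand-in for the steepest-descent allocation described in the abstract; channel noise is ignored and the 8x8 variance profile is invented).

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy bit allocation across transform coefficients.

    Uses the high-rate model D_k(b) ~ var_k * 2**(-2b): each extra bit
    cuts a coefficient's distortion by a factor of four. Bits are given
    one at a time to the coefficient whose distortion drops the most
    (a stand-in for the steepest-descent allocation; channel noise is
    ignored in this sketch)."""
    bits = np.zeros(len(variances), dtype=int)
    dist = np.asarray(variances, dtype=float).copy()
    for _ in range(total_bits):
        gain = dist - dist / 4.0          # distortion reduction per added bit
        k = int(np.argmax(gain))
        bits[k] += 1
        dist[k] /= 4.0
    return bits, dist.sum()

# invented 8x8 DCT coefficient variances decaying with spatial frequency
u, v = np.meshgrid(np.arange(8), np.arange(8))
variances = (100.0 / (1.0 + u + v) ** 2).flatten()
bits, model_distortion = allocate_bits(variances, total_bits=128)   # 2 bits/coef on average
print(bits.reshape(8, 8))
print("model distortion:", round(model_distortion, 3))
```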

  1. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or other mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  2. A Simple Model of Optimal Population Coding for Sensory Systems

    PubMed Central

    Doi, Eizaburo; Lewicki, Michael S.

    2014-01-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery. PMID:25121492

  3. Optimal transform coding in the presence of quantization noise.

    PubMed

    Diamantaras, K I; Strintzis, M G

    1999-01-01

    The optimal linear Karhunen-Loeve transform (KLT) attains the minimum reconstruction error for a fixed number of transform coefficients assuming that these coefficients do not contain noise. In any real coding system, however, the representation of the coefficients using a finite number of bits requires the presence of quantizers. We formulate the optimal linear transform using a data model that incorporates the quantization noise. Our solution does not correspond to an orthogonal transform and, in fact, it achieves a smaller mean squared error (MSE) compared to the KLT in the noisy case. Like the KLT, our solution depends on the statistics of the input signal, but it also depends on the bit-rate used for each coefficient. Especially for images, based on our optimality theory, we propose a simple modification of the discrete cosine transform (DCT). Our coding experiments show a peak signal-to-noise ratio (SNR) performance improvement over JPEG of the order of 0.2 dB with an overhead of less than 0.01 b/pixel. PMID:18267426

  4. Optimal bounds for parity-oblivious random access codes

    NASA Astrophysics Data System (ADS)

    Chailloux, André; Kerenidis, Iordanis; Kundu, Srijita; Sikora, Jamie

    2016-04-01

    Random access coding is an information task that has been extensively studied and has found many applications in quantum information. In this scenario, Alice receives an n-bit string x and wishes to encode x into a quantum state ρ_x, such that Bob, when receiving the state ρ_x, can choose any bit i ∈ [n] and recover the input bit x_i with high probability. Here we study two variants: parity-oblivious random access codes (RACs), where we impose the cryptographic property that Bob cannot infer any information about the parity of any subset of bits of the input apart from the single bits x_i; and even-parity-oblivious RACs, where Bob cannot infer any information about the parity of any even-size subset of bits of the input. In this paper, we provide the optimal bounds for parity-oblivious quantum RACs and show that they are asymptotically better than the optimal classical ones. Our results provide a large non-contextuality inequality violation and resolve the main open problem in a work of Spekkens et al (2009 Phys. Rev. Lett. 102 010401). Second, we provide the optimal bounds for even-parity-oblivious RACs by proving their equivalence to a non-local game and by providing tight bounds for the success probability of the non-local game via semidefinite programming. In the case of even-parity-oblivious RACs, the cryptographic property holds also in the device-independent model.

  5. Efficient sensory cortical coding optimizes pursuit eye movements.

    PubMed

    Liu, Bing; Macellaio, Matthew V; Osborne, Leslie C

    2016-01-01

    In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214

  6. On optimization of integration properties of biphase coded signals

    NASA Astrophysics Data System (ADS)

    Qiu, Wanzhi; Xiang, Jingcheng

    Within the context of the requirements for agile waveforms with a large compression ratio in biphase coded radars, and on the basis of the characteristics of interpulse integration processing of radar signals, the study proposes two sequence optimization criteria which are suitable for the radar processing patterns: interpulse waveform agility - pulse compression - FFT, and MTI - pulse compression - noncoherent integration. Applications of these criteria to optimizing sequences of length 127 are carried out. The output peak ratio of mainlobe to sidelobe (RMS) is improved considerably without a weighting network, while the autocorrelation and cross-correlation profiles of the sequences are very satisfactory. The RMS of coherent integration and noncoherent integration of eight sequences are 34.12 and 28.1 dB, respectively, when the return signals have zero Doppler shift. These values are about 12 and 6 dB higher than the RMS of single signals before integration.
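
    The mainlobe-to-peak-sidelobe ratio that the criteria act on is easy to compute from the aperiodic autocorrelation. The Python sketch below evaluates it for a length-127 binary (+/-1) sequence and improves it by greedy single-bit flips; this is only a toy single-pulse optimization, not the paper's integration-oriented criteria.

```python
import numpy as np

def psl_db(seq):
    """Mainlobe-to-peak-sidelobe ratio (dB) of the aperiodic
    autocorrelation of a +/-1 sequence."""
    r = np.correlate(seq, seq, mode="full")
    n = len(seq)
    sidelobes = np.abs(np.concatenate([r[:n - 1], r[n:]]))
    return 20.0 * np.log10(n / sidelobes.max())

def improve_sequence(n=127, n_trials=20000, seed=0):
    """Toy search: keep single-bit flips that raise the ratio."""
    rng = np.random.default_rng(seed)
    seq = rng.choice([-1.0, 1.0], size=n)
    best = psl_db(seq)
    for _ in range(n_trials):
        k = rng.integers(n)
        seq[k] *= -1.0
        score = psl_db(seq)
        if score > best:
            best = score
        else:
            seq[k] *= -1.0          # revert the flip
    return seq, best

seq, ratio = improve_sequence()
print(f"peak mainlobe-to-sidelobe ratio: {ratio:.1f} dB")
```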

  7. Investigation of Navier-Stokes code verification and design optimization

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Rajkumar

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between the concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-epsilon turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi

  8. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between the concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-epsilon turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization

  9. ITER ICRF antenna analysis and optimization using the TOPICA code

    NASA Astrophysics Data System (ADS)

    Milanesio, D.; Maggiora, R.

    2010-02-01

    This paper documents the complete analysis and optimization of the ITER ion cyclotron range of frequency (ICRF) launcher using the TOPICA code, carried out in the frame of EFDA design activities. The ability to simulate the detailed geometry of an ICRF antenna in front of a realistic plasma description, and to obtain the antenna input parameters and the radiated near electric field distribution, is of paramount importance for evaluating and predicting the overall system performance. Starting from a reference geometry, we pursued a detailed electrical optimization of the IC launcher and arrived at a final geometry showing a remarkable increase in the power coupled to the plasma. The optimization procedure involved the modification of different parts of the antenna, such as the horizontal septa, the coaxial cables, the coax-to-feeder transitions, the feeders, the strap and the grounding. Finally, the optimized geometry was subjected to a comprehensive analysis, varying the working frequency, the plasma conditions, and the poloidal and toroidal phasings between the feeding cables. The performance of the antenna was assessed not only in terms of input parameters and power coupled to the plasma, but also by means of power spectra and the evaluation of the RF potentials.

  10. Neutron Activation Analysis PRognosis and Optimization Code System.

    Energy Science and Technology Software Center (ESTSC)

    2004-08-20

    Version 00 NAAPRO predicts the results and main characteristics (detection limits, determination limits, measurement limits and relative precision of the analysis) of neutron activation analysis (instrumental and radiochemical). Gamma-ray dose rates at different times after sample irradiation and the input count rate of the spectrometry system are also predicted. The code uses a standard Windows user interface and extensive graphical tools for the visualization of the spectrometer characteristics (efficiency, response and background) and the simulated spectrum. The optimization part is not included in the current version of the code. This release is designated NAAPRO, Version 01.beta. The MCNP code was used for generating detector responses. The PREPRO-2000 and FCONV programs were used in the preparation of the program nuclear databases. A special program was developed for viewing, editing and updating the program databases (not included in the present package). The MCNP, PREPRO-2000 and FCONV software packages are not included in the NAAPRO package.

  11. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  12. Optimization of Coded Aperture Radioscintigraphy for Sentinel Lymph Node Mapping

    PubMed Central

    Fujii, Hirofumi; Idoine, John D.; Gioux, Sylvain; Accorsi, Roberto; Slochower, David R.; Lanza, Richard C.; Frangioni, John V.

    2011-01-01

    Purpose Radioscintigraphic imaging during sentinel lymph node (SLN) mapping could potentially improve localization; however, parallel-hole collimators have certain limitations. In this study, we explored the use of coded aperture (CA) collimators. Procedures Equations were derived for the six major dependent variables of CA collimators (i.e., masks) as a function of the ten major independent variables, and an optimized mask was fabricated. After validation, dual-modality CA and near-infrared (NIR) fluorescence SLN mapping was performed in pigs. Results Mask optimization required the judicious balance of competing dependent variables, resulting in sensitivity of 0.35%, XY resolution of 2.0 mm, and Z resolution of 4.2 mm at an 11.5 cm FOV. Findings in pigs suggested that NIR fluorescence imaging and CA radioscintigraphy could be complementary, but present difficult technical challenges. Conclusions This study lays the foundation for using CA collimation for SLN mapping, and also exposes several problems that require further investigation. PMID:21567254

  13. Image-Guided Non-Local Dense Matching with Three-Steps Optimization

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Zhang, Yongjun; Yue, Zhaoxi

    2016-06-01

    This paper introduces a new image-guided non-local dense matching algorithm that focuses on how to solve the following problems: 1) mitigating the influence of vertical parallax to the cost computation in stereo pairs; 2) guaranteeing the performance of dense matching in homogeneous intensity regions with significant disparity changes; 3) limiting the inaccurate cost propagated from depth discontinuity regions; 4) guaranteeing that the path between two pixels in the same region is connected; and 5) defining the cost propagation function between the reliable pixel and the unreliable pixel during disparity interpolation. This paper combines the Census histogram and an improved histogram of oriented gradient (HOG) feature together as the cost metrics, which are then aggregated based on a new iterative non-local matching method and the semi-global matching method. Finally, new rules of cost propagation between the valid pixels and the invalid pixels are defined to improve the disparity interpolation results. The results of our experiments using the benchmarks and the Toronto aerial images from the International Society for Photogrammetry and Remote Sensing (ISPRS) show that the proposed new method can outperform most of the current state-of-the-art stereo dense matching methods.
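
    A minimal sketch of the Census part of the cost metric is given below in Python: each pixel is encoded by which neighbours are darker than it, and the matching cost is the Hamming distance between the census codes of the two images at a candidate disparity. Window size, image sizes and the toy test are illustrative only; the paper additionally combines this with a HOG term and non-local/semi-global aggregation.

```python
import numpy as np

def census_transform(img, win=3):
    """Census transform: each pixel becomes a bit string recording which
    neighbours in a win x win window are darker than the centre pixel."""
    r = win // 2
    codes = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def census_cost(cl, cr, d):
    """Per-pixel Hamming distance between left codes and right codes
    shifted by disparity d (larger distance = worse match)."""
    diff = np.bitwise_xor(cl, np.roll(cr, d, axis=1))
    bytes_ = diff.view(np.uint8).reshape(*diff.shape, 8)     # popcount via unpackbits
    return np.unpackbits(bytes_, axis=-1).sum(axis=-1)

# toy test: the right 'image' is the left one shifted by 3 pixels
rng = np.random.default_rng(0)
left = rng.random((60, 80))
right = np.roll(left, -3, axis=1)
cl, cr = census_transform(left), census_transform(right)
mean_cost = np.array([census_cost(cl, cr, d).mean() for d in range(8)])
print("best disparity:", int(mean_cost.argmin()))            # expected: 3
```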

  14. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of point sources, the physical mechanism of phase-coded acoustical vortices is investigated with formula derivations of the acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results show that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and that lower fluctuations of the circular pressure distributions can be produced with more sources. With increasing source frequency, the acoustic pressure of acoustical vortices increases accordingly with a decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is also obtained for longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, and they agree well with the results of the numerical simulations. The favorable results of the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.

  15. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  16. Stability of the genetic code and optimal parameters of amino acids.

    PubMed

    Chechetkin, V R; Lobzin, V V

    2011-01-21

    The standard genetic code is known to be much more efficient in minimizing adverse effects of misreading errors and one-point mutations in comparison with a random code having the same structure, i.e. the same number of codons coding for each particular amino acid. We study the inverse problem, how the code structure affects the optimal physico-chemical parameters of amino acids ensuring the highest stability of the genetic code. It is shown that the choice of two or more amino acids with given properties determines unambiguously all the others. In this sense the code structure determines strictly the optimal parameters of amino acids or the corresponding scales may be derived directly from the genetic code. In the code with the structure of the standard genetic code the resulting values for hydrophobicity obtained in the scheme "leave one out" and in the scheme with fixed maximum and minimum parameters correlate significantly with the natural scale. The comparison of the optimal and natural parameters allows assessing relative impact of physico-chemical and error-minimization factors during evolution of the genetic code. As the resulting optimal scale depends on the choice of amino acids with given parameters, the technique can also be applied to testing various scenarios of the code evolution with increasing number of codified amino acids. Our results indicate the co-evolution of the genetic code and physico-chemical properties of recruited amino acids. PMID:20955716

  17. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…

  18. Stochastic dynamic programming for reservoir optimal control: Dense discretization and inflow correlation assumption made possible by parallel computing

    NASA Astrophysics Data System (ADS)

    Piccardi, Carlo; Soncini-Sessa, Rodolfo

    1991-05-01

    The solution via dynamic programming (DP) of a reservoir optimal control problem is often computationally prohibitive when the proper description of the inflow process leads to a system model having several state variables and/or when a sufficiently dense state discretization is required to achieve numerical accuracy. Thus, to simplify, the inflow correlation is usually neglected and/or a coarse state discretization is adopted. However, these simplifications may significantly affect the reliability of the solution of the optimization problem. Nowadays, the availability of very powerful computers based on innovative architectures (vector and parallel machines), even in the domain of personal computers (transputer architectures), stimulates the reformulation of the standard dynamic programming algorithm in a form able to exploit these new machine architectures. The reformulated DP algorithm and new machines enable faster and less costly solution of optimization problems involving a system model with two state variables (storage and previous-period inflow, thus taking into account the inflow correlation) and a number of states (of the order of 10^4) large enough to guarantee high numerical accuracy.
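
    A much-reduced sketch of such a two-state-variable formulation is shown below in Python: value iteration over a (storage, previous inflow class) grid with a Markov inflow transition and a quadratic release-tracking cost. All grids, probabilities and costs are invented and kept tiny; the point of the paper is precisely that parallel hardware makes far denser grids affordable.

```python
import numpy as np

# Toy stochastic DP for a reservoir: the state is (storage level,
# previous inflow class), so inflow correlation enters via a Markov chain.
S = np.linspace(0.0, 100.0, 21)       # storage grid (coarse on purpose)
Q = np.array([5.0, 15.0, 30.0])       # inflow classes (low / mid / high)
P = np.array([[0.6, 0.3, 0.1],        # P[i, j] = prob(next inflow j | prev i)
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
R = np.linspace(0.0, 40.0, 11)        # candidate releases
TARGET, GAMMA = 15.0, 0.95            # demand to track, discount factor

V = np.zeros((len(S), len(Q)))        # value function V[storage, prev inflow]
for _ in range(100):                  # value iteration toward steady state
    V_new = np.empty_like(V)
    for si, s in enumerate(S):
        for qi in range(len(Q)):
            best = np.inf
            for r in R:
                expected = 0.0        # expectation over the inflow transition
                for qj, p in enumerate(P[qi]):
                    s_next = np.clip(s + Q[qj] - r, S[0], S[-1])
                    expected += p * V[np.argmin(np.abs(S - s_next)), qj]
                best = min(best, (r - TARGET) ** 2 + GAMMA * expected)
            V_new[si, qi] = best
    V = V_new

print("value at half-full storage, mid inflow:", round(V[10, 1], 2))
```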

  19. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cached data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
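
    The unit-stride argument can be felt even from Python via numpy, since a C-ordered array stores rows contiguously: traversing by rows touches memory with unit stride, traversing by columns does not. The timing sketch below (array size and repetition count are arbitrary) typically shows the contiguous traversal winning, though the exact ratio depends on the cache hierarchy of the machine.

```python
import time
import numpy as np

a = np.random.rand(2000, 2000)        # C order: rows are contiguous in memory

def timed(fn, reps=5):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

# unit-stride traversal: each row is a contiguous block (cache friendly)
row_walk = timed(lambda: sum(a[i, :].sum() for i in range(a.shape[0])))
# large-stride traversal: each column gathers one element per row
col_walk = timed(lambda: sum(a[:, j].sum() for j in range(a.shape[1])))

print(f"row-wise  (unit stride): {row_walk * 1e3:6.1f} ms")
print(f"column-wise (strided)  : {col_walk * 1e3:6.1f} ms")
```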

  20. Optimization of Ambient Noise Cross-Correlation Imaging Across Large Dense Array

    NASA Astrophysics Data System (ADS)

    Sufri, O.; Xie, Y.; Lin, F. C.; Song, W.

    2015-12-01

    Ambient noise tomography is currently one of the most studied topics in seismology. It offers the possibility of studying the physical properties of rocks from the shallow subsurface down to upper-mantle depths using recorded noise sources. A network of new seismic sensors, capable of recording continuous seismic noise and processing it on-site, could help to assess the possible risk of volcanic activity on a volcano and help to understand the changes in the physical properties of a fault before and after an earthquake occurs. This new seismic sensor technology could also be used in the oil and gas industry to estimate the depletion rate of a reservoir and to improve velocity models for obtaining better seismic reflection cross-sections. Our recent NSF-funded project is bringing seismologists, signal processors, and computer scientists together to develop a new ambient noise seismic imaging system which could record continuous seismic noise, process it on-site, and send Green's functions and/or tomography images to the network. Such an imaging system requires an optimum number of sensors, sensor communication, and processing of the recorded data. To address these problems, we first worked on the question of the optimum number of sensors and the communication between them, using the small-aperture dense Sweetwater Array deployed by Nodal Seismic in 2014. We downloaded ~17 days of continuous data from 2268 one-component stations recorded between March 30 and April 16, 2015 from the IRIS DMC and performed cross-correlation to determine the lag times between station pairs. The lag times were then arranged in matrix form. Our goal is to select random lag-time values in the matrix, treat all other elements as missing or unknown, and perform matrix completion to determine how close the reconstructed values are to the actual calculated values. This would give us a better idea
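
    The per-pair lag-time measurement that fills the matrix can be sketched in a few lines of Python: cross-correlate two noise traces and take the lag of the correlation peak. The synthetic test below (sampling interval, trace length and the 2.5 s delay are invented) recovers the imposed delay; matrix completion of the resulting lag-time matrix is a separate step not shown here.

```python
import numpy as np

def lag_time(trace_a, trace_b, dt):
    """Delay (s) of trace_b relative to trace_a, from the peak of their
    cross-correlation; positive means station B lags station A."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-(len(a) - 1), len(b))
    return lags[np.argmax(xcorr)] * dt

# synthetic test: the same noise wavetrain arrives 2.5 s later at station B
dt, n = 0.01, 5000
rng = np.random.default_rng(0)
noise = rng.normal(size=n)
shift = int(2.5 / dt)
sta_a = noise
sta_b = np.concatenate([np.zeros(shift), noise[:-shift]])
print("estimated lag:", lag_time(sta_a, sta_b, dt), "s")     # ~2.5 s
```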

  1. Experimental qualification of a code for optimizing gamma irradiation facilities

    NASA Astrophysics Data System (ADS)

    Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.

    Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are frequently found difficult to meet.

  2. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  3. Emergence of optimal decoding of population codes through STDP.

    PubMed

    Habenschuss, Stefan; Puhr, Helmut; Maass, Wolfgang

    2013-06-01

    The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli. PMID:23517096

  4. A new algorithm for optimizing the wavelength coverage for spectroscopic studies: Spectral Wavelength Optimization Code (SWOC)

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.; Feltzing, S.; Lind, K.; Caffau, E.; Korn, A. J.; Schnurr, O.; Hansen, C. J.; Koch, A.; Sbordone, L.; de Jong, R. S.

    2016-06-01

    The past decade and a half has seen the design and execution of several ground-based spectroscopic surveys, both Galactic and extragalactic. Additionally, new surveys are being designed that extend the boundaries of current surveys. In this context, many important considerations must be made when designing a spectrograph for the future. Among these is the determination of the optimum wavelength coverage. In this work, we present a new code for determining the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a given survey. In its first mode, it utilizes a user-defined list of spectral features to compute a figure-of-merit for different spectral configurations. The second mode utilizes a set of flux-calibrated spectra, determining the spectral regions that show the largest differences among the spectra. Our algorithm is easily adaptable for any set of science requirements and any spectrograph design. We apply the algorithm to several examples, including 4MOST, showing that the method yields important design constraints on the wavelength regions.
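
    The first mode of operation can be caricatured in a few lines: given a user-defined list of feature wavelengths and science weights, score each candidate window by the summed weight of the features it covers and rank the windows. The Python sketch below does exactly that with an invented line list and a fixed window width; it is a simplified stand-in for the code's actual figure-of-merit.

```python
import numpy as np

def window_merit(lines, weights, start, width):
    """Figure of merit of one wavelength window: summed weight of the
    spectral features it covers (a simplified stand-in for SWOC)."""
    mask = (lines >= start) & (lines <= start + width)
    return weights[mask].sum()

def best_windows(lines, weights, width, grid_step=1.0):
    """Scan window start positions and rank them by merit."""
    starts = np.arange(lines.min(), lines.max() - width, grid_step)
    merits = np.array([window_merit(lines, weights, s, width) for s in starts])
    order = np.argsort(merits)[::-1]
    return starts[order], merits[order]

# toy user-defined feature list: wavelengths (nm) and science weights
rng = np.random.default_rng(2)
lines = rng.uniform(380.0, 900.0, 300)
weights = rng.uniform(0.1, 1.0, 300)
starts, merits = best_windows(lines, weights, width=80.0)
print(f"best 80 nm window starts at {starts[0]:.0f} nm (merit {merits[0]:.1f})")
```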

  5. A new algorithm for optimizing the wavelength coverage for spectroscopic studies: Spectral Wavelength Optimization Code (SWOC)

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.; Feltzing, S.; Lind, K.; Caffau, E.; Korn, A. J.; Schnurr, O.; Hansen, C. J.; Koch, A.; Sbordone, L.; de Jong, R. S.

    2016-09-01

    The past decade and a half has seen the design and execution of several ground-based spectroscopic surveys, both Galactic and extragalactic. Additionally, new surveys are being designed that extend the boundaries of current surveys. In this context, many important considerations must be made when designing a spectrograph for the future. Among these is the determination of the optimum wavelength coverage. In this work, we present a new code for determining the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a given survey. In its first mode, it utilizes a user-defined list of spectral features to compute a figure-of-merit for different spectral configurations. The second mode utilizes a set of flux-calibrated spectra, determining the spectral regions that show the largest differences among the spectra. Our algorithm is easily adaptable for any set of science requirements and any spectrograph design. We apply the algorithm to several examples, including 4MOST, showing that the method yields important design constraints on the wavelength regions.

  6. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    NASA Astrophysics Data System (ADS)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To reduce the average code length further, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when compared with JPEG-LS.
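
    For a one-sided geometric source, the classical optimal prefix codes are the Golomb codes, and their power-of-two (Rice) subfamily is especially simple to implement; the paper's codes extend this to the two-sided, discretized-Laplacian case. The Python sketch below shows the simple version: signed prediction residuals are zigzag-mapped to non-negative integers and Rice-coded with parameter k (values chosen for illustration only).

```python
def zigzag(e):
    """Map a signed prediction residual to a non-negative integer
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Golomb-Rice codeword for non-negative n with parameter m = 2**k:
    a unary quotient followed by k binary remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

residuals = [0, -1, 3, -4, 2, 0, 7, -2]   # toy prediction residuals
k = 1
bits = "".join(rice_encode(zigzag(e), k) for e in residuals)
print(bits, f"({len(bits)} bits for {len(residuals)} residuals)")
```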

  7. Wireless image transmission using turbo codes and optimal unequal error protection.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2005-11-01

    A novel image transmission scheme is proposed for the communication of set partitioning in hierarchical trees image streams over wireless channels. The proposed scheme employs turbo codes and Reed-Solomon codes in order to deal effectively with burst errors. An algorithm for the optimal unequal error protection of the compressed bitstream is also proposed and applied in conjunction with an inherently more efficient technique for product code decoding. The resulting scheme is tested for the transmission of images over wireless channels. Experimental evaluation clearly demonstrates the superiority of the proposed transmission system in comparison to well-known robust coding schemes. PMID:16279187

  8. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission. PMID:16900669

  9. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The project start date was delayed by approximately 7 weeks due to contractual difficulties. Although the original start date was December 14, 2000, the Principal Investigator did not receive the Project Authorization Notice (PAN) from the Virginia Tech Office of Sponsored Programs until February 5, 2001. Therefore, the first project task (i.e., Project Planning) did not begin until February 2001. Activities completed as part of this effort included: (i) revision and updating of the Project Work Plan, (ii) preparation of equipment procurement documents for the Virginia Tech Purchasing Office, and (iii) initiation of preliminary site visits to several coal preparation plants to discuss test work with industrial personnel. After a brief (2 month) contractual delay, project activities are now underway. There are currently no contractual issues or technical problems associated with this project. Project work activities are now expected to proceed in accordance with the proposed project schedule.

  10. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    David M. Hyman

    2002-01-14

    All work associated with Task 1 (Baseline Assessment) was successfully completed and preliminary corrections/recommendations were provided back to the management at each test site. Detailed float-sink tests were completed for Site No.1 and are currently underway for Sites No.2-No. 4. Unfortunately, the work associated with sample analyses (Task 4--Sample Analysis) has been delayed because of a backlog of coal samples at the commercial laboratory participating in this project. As a result, a no-cost project time extension may be necessary in order to complete the project. A decision will be made at the end of the next reporting period. Some of the work completed this quarter included (i) development of mass balance routines for data analysis, (ii) formulation of an expert system rule base, (iii) completion of statistical computations and mathematical curve fits for the density tracer test data. In addition, an ''O & M Checklist'' was prepared to provide plant operators with simple operating and maintenance guidelines that must be followed to obtain good HMC performance.

  11. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The fieldwork associated with Task 1 (Baseline Assessment) was completed this quarter. Detailed cyclone inspections were completed at all but one plant during maintenance shifts. Analysis of the test samples is also currently underway in Task 4 (Sample Analysis). A Draft Recommendation was prepared for the management at each test site in Task 2 (Circuit Modification). All required procurements were completed. Density tracers were manufactured and tested for quality control purposes. Special sampling tools were also purchased and/or fabricated for each plant site. The preliminary experimental data show that the partitioning performance for all seven HMC circuits was generally good. This was attributed to well-maintained cyclones and good operating practices. However, the density tracers detected that most circuits suffered from poor control of media cutpoint. These problems were attributed to poor x-ray calibration and improper manual density measurements. These conclusions will be validated after the analyses of the composite samples have been completed.

  12. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-09-14

    All project activities are now winding down. Follow-up tracer tests were conducted at several of the industrial test sites and analysis of the experimental data is currently underway. All required field work was completed during this quarter. In addition, the heavy medium cyclone simulation and expert system programs are nearly completed and user manuals are being prepared. Administrative activities (e.g., project documents, cost-sharing accounts, etc.) are being reviewed and prepared for final submission to DOE. All project reporting requirements are up to date. All financial expenditures are within approved limits.

  13. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-01-15

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Work is underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  14. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-09-09

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Work is underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  15. Joint optimization of run-length coding, Huffman coding, and quantization table with complete baseline JPEG decoder compatibility.

    PubMed

    Yang, En-hui; Wang, Longji

    2009-01-01

    To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc. PMID:19095519

  16. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop

  17. Signal-to-noise-optimal scaling of heterogenous population codes.

    PubMed

    Leibold, Christian

    2013-01-01

    Similarity measures for neuronal population responses that are based on scalar products can convey little information if the neurons have different firing statistics. Based on signal-to-noise optimality, this paper derives positive weighting factors for the individual neurons' response rates in a heterogeneous neuronal population. The weights only depend on empirical statistics. If firing follows Poisson statistics, the weights can be interpreted as mutual information per spike. The scaling is shown to improve linear separability and clustering as compared to unscaled inputs. PMID:23984844
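
    The weighting idea can be illustrated with a short numerical sketch. The snippet below is not Leibold's derivation: it assumes a simple two-condition discrimination setting, weights each neuron by an empirical signal-to-noise ratio of its rate difference, and, for Poisson firing, computes the standard bits-per-spike quantity the abstract alludes to. All rates and helper names (snr_weights, info_per_spike_poisson) are illustrative.

    ```python
    import numpy as np

    def snr_weights(rates_a, rates_b):
        """Per-neuron positive weights from empirical statistics (assumption:
        signal-to-noise ratio of the rate difference between two conditions)."""
        mu_a, mu_b = rates_a.mean(axis=0), rates_b.mean(axis=0)
        var = 0.5 * (rates_a.var(axis=0) + rates_b.var(axis=0)) + 1e-12
        return np.abs(mu_a - mu_b) / var  # larger weight for more reliable neurons

    def info_per_spike_poisson(rates, p_stim=None):
        """Mutual information per spike for Poisson neurons (Brenner et al. form):
        I = sum_s p(s) * (r_s / r_bar) * log2(r_s / r_bar)."""
        rates = np.asarray(rates, dtype=float)           # shape (n_stimuli, n_neurons)
        p = np.full(rates.shape[0], 1.0 / rates.shape[0]) if p_stim is None else p_stim
        r_bar = p @ rates + 1e-12
        ratio = rates / r_bar
        return np.einsum('s,sn->n', p, ratio * np.log2(ratio + 1e-12))

    # toy example: 2 conditions, 3 neurons with different reliabilities
    rng = np.random.default_rng(0)
    a = rng.poisson([5.0, 20.0, 2.0], size=(500, 3))
    b = rng.poisson([8.0, 21.0, 2.2], size=(500, 3))
    print("SNR weights:", snr_weights(a, b))
    print("bits/spike :", info_per_spike_poisson([[5, 20, 2], [8, 21, 2.2]]))
    ```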

  18. A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.

    PubMed

    Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick

    2015-11-01

    In this paper, we investigate the problem of optimization of multivariate performance measures, and propose a novel algorithm for it. Different from traditional machine learning methods which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points as a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, sparse codes, and parameter of the linear function, we propose a joint optimization problem. In this problem, both the reconstruction error and sparsity of the sparse codes and the upper bound of the complex loss function are minimized. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameter. To optimize this problem, we develop an iterative algorithm based on gradient descent methods to learn the sparse codes and hyper-predictor parameter alternately. Experimental results on some benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms. PMID:26291045

  19. Dispersion-optimized optical fiber for high-speed long-haul dense wavelength division multiplexing transmission

    NASA Astrophysics Data System (ADS)

    Wu, Jindong; Chen, Liuhua; Li, Qingguo; Wu, Wenwen; Sun, Keyuan; Wu, Xingkun

    2011-07-01

    Four non-zero-dispersion-shifted fibers with almost the same large effective area (Aeff) and optimized dispersion properties are realized by novel index profile designing and modified vapor axial deposition and modified chemical vapor deposition processes. An Aeff of greater than 71 μm² is obtained for the designed fibers. Three of the developed fibers with positive dispersion are improved by reducing the 1550 nm dispersion slope from 0.072 ps/nm²/km to 0.063 ps/nm²/km or 0.05 ps/nm²/km, increasing the 1550 nm dispersion from 4.972 ps/nm/km to 5.679 ps/nm/km or 7.776 ps/nm/km, and shifting the zero-dispersion wavelength from 1500 nm to 1450 nm. One of these fibers is in good agreement with G.655D and G.656 fibers simultaneously, and another one with G.655E and G.656 fibers; both fibers are beneficial to high-bit-rate long-haul dense wavelength division multiplexing systems over the S-, C-, and L-bands. The fourth developed fiber, with negative dispersion, is also improved by reducing the 1550 nm dispersion slope from 0.12 ps/nm²/km to 0.085 ps/nm²/km and increasing the 1550 nm dispersion from -4 ps/nm/km to -6.016 ps/nm/km, providing facilities for a submarine transmission system. Experimental measurements indicate that the developed fibers all have excellent optical transmission and good macrobending and splice performances.

  20. DOPEX-1D2C: A one-dimensional, two-constraint radiation shield optimization code

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1973-01-01

    A one-dimensional, two-constraint radiation shield weight optimization procedure and a computer program, DOPEX-1D2C, are described. DOPEX-1D2C uses the steepest descent method to alter a set of initial (input) thicknesses of a spherical shield configuration to achieve a minimum weight while simultaneously satisfying two dose-rate constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. Code input instructions, a FORTRAN-4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is less than 1/2 minute on an IBM 7094.
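
    The core idea (steepest descent on layer thicknesses under an exponential dose-thickness relation, subject to two dose-rate constraints) can be sketched in a few lines. The snippet below is a toy re-implementation in Python rather than the FORTRAN-4 code described in the record; it drives an initial design toward lower weight while keeping the constraints approximately satisfied. All material data, the penalty weight, and the step size are made-up illustrative values.

    ```python
    import numpy as np

    # Hypothetical three-layer shield, flattened to a 1-D slab model:
    # rho = layer densities (g/cm^3), mu[j, i] = attenuation of dose component j
    # (e.g. neutron, gamma) by layer material i (1/cm).  All numbers are made up.
    rho = np.array([7.8, 1.0, 11.3])                  # steel, water, lead
    mu  = np.array([[0.16, 0.10, 0.05],               # neutron dose
                    [0.45, 0.03, 0.60]])              # gamma dose
    D0     = np.array([50.0, 20.0])                   # unshielded dose rates
    limit  = np.array([1.0, 2.0])                     # two dose-rate constraints
    needed = np.log(D0 / limit)                       # required attenuation (log space)

    def dose(t):                                      # exponential dose-thickness relation
        return D0 * np.exp(-mu @ t)

    t, k, step = np.array([20.0, 60.0, 15.0]), 200.0, 5.0e-3
    for _ in range(20000):                            # steepest descent on penalized weight
        viol = np.maximum(needed - mu @ t, 0.0)       # constraint violations (log space)
        grad = rho - 2.0 * k * (viol @ mu)            # d/dt of  rho.t + k * sum(viol^2)
        t = np.maximum(t - step * grad, 0.0)          # descend, keep thicknesses >= 0

    # note: the quadratic penalty is soft, so the final design may sit marginally
    # above the dose limits; a larger k tightens the constraints.
    print("thicknesses (cm):", np.round(t, 2))
    print("dose rates      :", np.round(dose(t), 2), " limits:", limit)
    print("areal weight    :", round(float(rho @ t), 1))
    ```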

  1. An Optimization Multi-path Inter-Session Network Coding in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Xia, Zhuo-Qun; Liu, Chao; Zhu, Xue-Han; Liu, Pin-Chao; Xie, Li-Tong

    Wireless sensor networks (WSNs) typically provide several paths from a source to a destination. Using such paths efficiently has the potential not only to increase multiplicatively the achieved end-to-end rate, but also to provide robustness against performance fluctuations of any single link in the system. Network coding is a new technique that improves network performance. In this paper we analyze how to use network coding according to the characteristics of multi-path routing in WSNs. As a result, an optimized multi-path inter-session network coding scheme is designed to improve WSN performance.

  2. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; the other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
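
    The adaptive-selection idea is easy to sketch: for each block of nonnegative mapped samples, compute the coded length under a small set of variable-length code options and keep the cheapest. The snippet below uses plain Golomb-Rice parameters as stand-ins for the module's code options; it illustrates the selection mechanism, not the actual flight coder.

    ```python
    def rice_code(value, k):
        """Golomb-Rice codeword for a nonnegative integer: unary quotient + k-bit remainder."""
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

    def encode_block(samples, options=(0, 1, 2, 3, 4)):
        """Pick the cheapest code option for this block (adaptive universal coding idea)."""
        costs = {k: sum(len(rice_code(s, k)) for s in samples) for k in options}
        best = min(costs, key=costs.get)
        return best, "".join(rice_code(s, best) for s in samples)

    # low-entropy and higher-entropy blocks select different options
    print(encode_block([0, 1, 0, 2, 1, 0, 0, 1])[0])        # small k wins
    print(encode_block([9, 14, 7, 22, 11, 17, 8, 13])[0])   # larger k wins
    ```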

  3. CodHonEditor: Spreadsheets for Codon Optimization and Editing of Protein Coding Sequences.

    PubMed

    Takai, Kazuyuki

    2016-05-01

    Gene synthesis is getting more important with the growing availability of low-cost commercial services. The coding sequences are often "optimized" with respect to the relative synonymous codon usage (RSCU) before synthesis, which is generally included in the commercial services. However, the codon optimization processes are different among different providers and are often hidden from the users. Here, the d'Hondt method, which is widely adopted as a method for determining the number of seats for each party in proportional-representation public elections, is applied to RSCU fitting. This allowed me to make a set of electronic spreadsheets for manual design of protein coding sequences for expression in Escherichia coli, with which users can see the process of codon optimization and can manually edit the codons after the automatic optimization. The spreadsheets may also be useful for molecular biology education. PMID:27002987
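
    A d'Hondt-style apportionment of codon counts can be written directly from the election analogy: synonymous-codon usage values play the role of party votes, and the number of occurrences of the amino acid in the target protein plays the role of seats. The sketch below is a minimal illustration under that reading of the abstract; the usage numbers and function names are made up, not taken from CodHonEditor.

    ```python
    import heapq

    def dhondt_apportion(usage, seats):
        """Allocate `seats` codon slots among synonymous codons by the d'Hondt
        highest-averages rule: repeatedly award the next slot to the codon with
        the largest usage / (allocated + 1) quotient."""
        alloc = {codon: 0 for codon in usage}
        heap = [(-u, codon) for codon, u in usage.items()]   # max-heap via negation
        heapq.heapify(heap)
        for _ in range(seats):
            _, codon = heapq.heappop(heap)
            alloc[codon] += 1
            heapq.heappush(heap, (-usage[codon] / (alloc[codon] + 1), codon))
        return alloc

    # Illustrative (not measured) E. coli-like usage for leucine codons;
    # apportion 10 leucine positions of a target protein among them.
    leu_usage = {"CTG": 0.50, "TTA": 0.13, "TTG": 0.13, "CTT": 0.10, "CTC": 0.10, "CTA": 0.04}
    print(dhondt_apportion(leu_usage, 10))
    ```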

  4. Optimal Multicarrier Phase-Coded Waveform Design for Detection of Extended Targets

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2013-01-01

    We design a parametric multicarrier phase-coded (MCPC) waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. Traditional waveform design techniques provide only the optimal energy spectral density of the transmit waveform and suffer a performance loss in the synthesis process of the time-domain signal. Therefore, we opt for directly designing an MCPC waveform in terms of its time-frequency codes to obtain the optimal detection performance. First, we describe the modeling assumptions considering an extended target buried within the signal-dependent clutter with known power spectral density, and deduce the performance characteristics of the optimal detector. Then, considering an MCPC signal transmission, we express the detection characteristics in terms of the phase-codes of the MCPC waveform and propose to optimally design the MCPC signal by maximizing the detection probability. Our numerical results demonstrate that the designed MCPC signal attains the optimal detection performance and requires a lesser computational time than the other parametric waveform design approach.

  5. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511-keV source strength of 0.001 photons cm⁻² s⁻¹, the source location accuracy is expected to be ±0.2°.

  6. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecisions.

  7. A wavelet-based neural model to optimize and read out a temporal population code

    PubMed Central

    Luvizotto, Andre; Rennó-Costa, César; Verschure, Paul F. M. J.

    2012-01-01

    It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations where spatio-temporal transformations at lower levels form a compact stimulus encoding using TPC that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned

  8. Multidimensional optimization of fusion reactors using heterogenous codes and engineering software

    NASA Astrophysics Data System (ADS)

    Hartwig, Zachary; Olynyk, Geoffrey; Whyte, Dennis

    2012-10-01

    Magnetic confinement fusion reactors are tightly coupled systems. The parameters under a designer's control, such as magnetic field, wall temperature, and blanket thickness, simultaneously affect the behavior, performance, and components of the reactor, leading to complex tradeoffs and design optimizations. In addition, the engineering analyses require non-trivial, self-consistent inputs, such as reactor geometry, to ensure high fidelity between the various physics and engineering design codes. We present a framework for analysis and multidimensional optimization of fusion reactor systems based on the coupling of heterogeneous codes and engineering software. While this approach is widely used in industry, most code-coupling efforts in fusion have been focused on plasma and edge physics. Instead, we use a simplified plasma model to concentrate on how fusion neutrons and heat transfer affect the design of the first wall, breeding blanket, and magnet systems. The framework combines solid modeling, neutronics, and engineering multiphysics codes and software, linked across Windows and Linux clusters. Initial results for optimizing the design of a compact, high-field tokamak reactor based on high-temperature demountable superconducting coils and a liquid blanket are presented.

  9. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  10. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    SciTech Connect

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan; Quinlan, Daniel

    2013-11-23

    This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  11. Optimization of energy saving device combined with a propeller using real-coded genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ryu, Tomohiro; Kanemaru, Takashi; Kataoka, Shiro; Arihama, Kiyoshi; Yoshitake, Akira; Arakawa, Daijiro; Ando, Jun

    2014-06-01

    This paper presents a numerical optimization method to improve the performance of the propeller with Turbo-Ring using real-coded genetic algorithm. In the presented method, Unimodal Normal Distribution Crossover (UNDX) and Minimal Generation Gap (MGG) model are used as crossover operator and generation-alternation model, respectively. Propeller characteristics are evaluated by a simple surface panel method "SQCM" in the optimization process. Blade sections of the original Turbo-Ring and propeller are replaced by the NACA66 a = 0.8 section. However, original chord, skew, rake and maximum blade thickness distributions in the radial direction are unchanged. Pitch and maximum camber distributions in the radial direction are selected as the design variables. Optimization is conducted to maximize the efficiency of the propeller with Turbo-Ring. The experimental result shows that the efficiency of the optimized propeller with Turbo-Ring is higher than that of the original propeller with Turbo-Ring.
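
    For readers unfamiliar with UNDX, the operator can be sketched as follows: a child is sampled around the midpoint of two parents, with normally distributed noise along the parent axis and, scaled by the distance of a third parent from that axis, in the orthogonal complement. The snippet below uses the commonly quoted defaults sigma_xi = 0.5 and sigma_eta = 0.35/sqrt(n); it omits the MGG generation alternation and the SQCM propeller evaluation, and all names and numbers are illustrative.

    ```python
    import numpy as np

    def undx(p1, p2, p3, rng, sigma_xi=0.5, sigma_eta_scale=0.35):
        """Unimodal Normal Distribution Crossover for real-coded GAs.
        Child = midpoint of p1, p2 + normal noise along (p2 - p1) + normal noise
        orthogonal to it, scaled by the distance of the third parent from that axis."""
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        n = p1.size
        mid = 0.5 * (p1 + p2)
        d = p2 - p1
        e = d / (np.linalg.norm(d) + 1e-12)               # unit vector along parent axis
        diff = p3 - p1
        dist = np.linalg.norm(diff - (diff @ e) * e)      # secondary search width
        child = mid + rng.normal(0.0, sigma_xi) * d       # primary (axis) component
        ortho = rng.normal(0.0, sigma_eta_scale / np.sqrt(n), n) * dist
        child += ortho - (ortho @ e) * e                  # project the noise off the axis
        return child

    rng = np.random.default_rng(1)
    parents = rng.uniform(-1, 1, size=(3, 8))             # e.g. pitch/camber design variables
    print(undx(*parents, rng))
    ```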

  12. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    SciTech Connect

    Watkins, E.F.; Greenspan, E.

    1995-12-31

    SWAN is a useful code for optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost for an ICF target chamber shield, and the background radiation for explosive detection systems and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module can handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability.

  13. On the Optimized Atomic Exchange Potential method and the CASSANDRA opacity code

    NASA Astrophysics Data System (ADS)

    Jeffery, M.; Harris, J. W. O.; Hoarty, D. J.

    2016-09-01

    The CASSANDRA, average atom, opacity code uses the local density approximation (LDA) to calculate electron exchange interactions and this introduces inaccuracies due to the inconsistent treatment of the Coulomb and exchange energy terms of the average total energy equation. To correct this inconsistency, the Optimized Atomic Central Potential Method (OPM) of calculating exchange interactions has been incorporated into CASSANDRA. The LDA and OPM formalisms are discussed and the reason for the discrepancy when using the LDA is highlighted. CASSANDRA uses a Taylor series expansion about an average atom when computing transition energies and uses Janak's Theorem to determine the Taylor series coefficients. Janak's Theorem does not apply to the OPM; however, a corollary to Janak's Theorem has been employed in the OPM implementation. A derivation of this corollary is provided. Results of simulations from CASSANDRA using the OPM are shown and compared against CASSANDRA LDA, DAVROS (a detailed term accounting opacity code), the GRASP2K atomic physics code and experimental data.

  14. Optimization of a coded aperture coherent scatter spectral imaging system for medical imaging

    NASA Astrophysics Data System (ADS)

    Greenberg, Joel A.; Lakshmanan, Manu N.; Brady, David J.; Kapadia, Anuj J.

    2015-03-01

    Coherent scatter X-ray imaging is a technique that provides spatially-resolved information about the molecular structure of the material under investigation, yielding material-specific contrast that can aid medical diagnosis and inform treatment. In this study, we demonstrate a coherent-scatter imaging approach based on the use of coded apertures (known as coded aperture coherent scatter spectral imaging) that enables fast, dose-efficient, high-resolution scatter imaging of biologically-relevant materials. Specifically, we discuss how to optimize a coded aperture coherent scatter imaging system for a particular set of objects and materials, describe and characterize our experimental system, and use the system to demonstrate automated material detection in biological tissue.

  15. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  16. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video object with different distortion scale. It is necessary to analyze the priority of the video objects according to its semantic importance, intrinsic properties and psycho-visual characteristics such that the bit budget can be distributed properly to video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on object-level visual attention model and further propose an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on the user interactivity. To evaluate the performance of the proposed approach, we compare it with traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Comparing with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.

  17. Heuristic ternary error-correcting output codes via weight optimization and layered clustering-based approach.

    PubMed

    Zhang, Xiao-Lei

    2015-02-01

    One important classifier ensemble for multiclass classification problems is error-correcting output codes (ECOCs). It bridges multiclass problems and binary-class classifiers by decomposing multiclass problems into a series of binary-class problems. In this paper, we present a heuristic ternary code, named weight optimization and layered clustering-based ECOC (WOLC-ECOC). It starts with an arbitrary valid ECOC and iterates the following two steps until the training risk converges. The first step, named layered clustering-based ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing binary-class problem. The second step adds the new classifiers to the ECOC by a novel optimized weighted (OW) decoding algorithm, where the optimization problem of the decoding is solved by the cutting plane algorithm. Technically, LC-ECOC ensures that the heuristic training process is not blocked by some difficult binary-class problem. OW decoding guarantees the nonincrease of the training risk, ensuring a small code length. Results on 14 UCI datasets and a music genre classification problem demonstrate the effectiveness of WOLC-ECOC. PMID:25486660

  18. An application of anti-optimization in the process of validating aerodynamic codes

    NASA Astrophysics Data System (ADS)

    Cruz, Juan R.

    An investigation was conducted to assess the usefulness of anti-optimization in the process of validating aerodynamic codes. Anti-optimization is defined here as the intentional search for regions where the computational and experimental results disagree. Maximizing such disagreements can be a useful tool in uncovering errors and/or weaknesses in both analyses and experiments. The codes chosen for this investigation were an airfoil code and a lifting line code used together as an analysis to predict three-dimensional wing aerodynamic coefficients. The parameter of interest was the maximum lift coefficient of the three-dimensional wing, CL max. The test domain encompassed Mach numbers from 0.3 to 0.8, and Reynolds numbers from 25,000 to 250,000. A simple rectangular wing was designed for the experiment. A wind tunnel model of this wing was built and tested in the NASA Langley Transonic Dynamics Tunnel. Selection of the test conditions (i.e., Mach and Reynolds numbers) was made by applying the techniques of response surface methodology and considerations involving the predicted experimental uncertainty. The test was planned and executed in two phases. In the first phase, runs were conducted at the pre-planned test conditions. Based on these results, additional runs were conducted in areas where significant differences in CL max were observed between the computational results and the experiment---in essence applying the concept of anti-optimization. These additional runs were used to verify the differences in CL max and assess the extent of the region where these differences occurred. The results of the experiment showed that the analysis was capable of predicting CL max to within 0.05 over most of the test domain. The application of anti-optimization succeeded in identifying a region where the computational and experimental values of CL max differed by more than 0.05, demonstrating the usefulness of anti-optimization in the process of validating aerodynamic codes.

  19. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical display of engine motions, pressures, and temperatures is included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis. One is for three adjustable inputs and one is for four. Also, two optimization searches for calculated piston motion are presented for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  20. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates-as reported by a cache simulation tool, and confirmed by hardware counters-only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
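
    The stride rule of thumb is easy to demonstrate, although the paper's kernels are compiled CFD code rather than Python. The NumPy micro-benchmark below sums a C-ordered matrix once over contiguous rows (unit stride) and once over strided columns; on most machines the strided traversal is noticeably slower, though the exact gap is architecture-dependent, which is consistent with the paper's observation that such effects vary across systems.

    ```python
    import time
    import numpy as np

    a = np.random.rand(4000, 4000)            # C-ordered: rows are contiguous in memory

    def sum_by_rows(x):                       # unit-stride (spatially local) access
        return sum(float(x[i, :].sum()) for i in range(x.shape[0]))

    def sum_by_cols(x):                       # large-stride access across rows
        return sum(float(x[:, j].sum()) for j in range(x.shape[1]))

    for f in (sum_by_rows, sum_by_cols):
        t0 = time.perf_counter()
        s = f(a)
        print(f"{f.__name__}: {time.perf_counter() - t0:.3f} s (sum = {s:.1f})")
    ```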

  1. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  2. Finite population analysis of the effect of horizontal gene transfer on the origin of an universal and optimal genetic code.

    PubMed

    Aggarwal, Neha; Bandhu, Ashutosh Vishwa; Sengupta, Supratim

    2016-01-01

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA- and protein-based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences, each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code, eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold do we find that the ten-amino-acid code whose structure is most consistent with the standard genetic code (SGC) gets fixed in the population with the highest probability. We examine how the threshold is determined by factors such as the population size, the length of the sequences, and the selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC. PMID:27232957

  3. Finite population analysis of the effect of horizontal gene transfer on the origin of an universal and optimal genetic code

    NASA Astrophysics Data System (ADS)

    Aggarwal, Neha; Vishwa Bandhu, Ashutosh; Sengupta, Supratim

    2016-06-01

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA- and protein-based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences, each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code, eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold do we find that the ten-amino-acid code whose structure is most consistent with the standard genetic code (SGC) gets fixed in the population with the highest probability. We examine how the threshold is determined by factors such as the population size, the length of the sequences, and the selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.

  4. Improvement of BER performance in MIMO-CDMA systems by using initial-phase optimized gold codes

    NASA Astrophysics Data System (ADS)

    Develi, Ibrahim; Filiz, Meryem

    2013-01-01

    This paper describes a new approach to improve the bit error rate (BER) performance of a multiple-input multiple-output code-division multiple-access (MIMO-CDMA) system over quasi-static Rayleigh fading channels. The system considered employs robust space-time successive interference cancellation detectors and initial-phase optimized Gold codes for the improvement. The results clearly indicate that the use of initial-phase optimized Gold codes can significantly improve the BER performance of the system compared to the performance of a multiuser MIMO-CDMA system with conventional nonoptimized Gold codes. Furthermore, this performance improvement is achieved without any increase in system complexity.
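
    The degree of freedom being optimized, namely the relative phase between the two preferred m-sequences that form a Gold code, can be shown with a small generator. The sketch below builds length-31 Gold codes from the standard preferred polynomial pair x^5 + x^2 + 1 and x^5 + x^4 + x^3 + x^2 + 1 and prints peak periodic cross-correlations for a few phase choices; the paper's actual BER-driven phase-selection criterion is not reproduced.

    ```python
    import numpy as np

    def mseq(lags, length=31, seed=(1, 0, 0, 0, 0)):
        """Maximal-length sequence from the GF(2) recurrence s[t] = XOR of s[t - l] for l in lags."""
        s = list(seed)
        for t in range(len(seed), length):
            s.append(sum(s[t - l] for l in lags) % 2)
        return np.array(s[:length])

    def gold_code(phase, length=31):
        """Gold sequence: XOR of the preferred m-sequence pair generated by
        x^5 + x^2 + 1 (lags {3, 5}) and x^5 + x^4 + x^3 + x^2 + 1 (lags {1, 2, 3, 5}),
        with the second sequence cyclically shifted by `phase` (the initial phase)."""
        u = mseq({3, 5}, length)
        v = mseq({1, 2, 3, 5}, length)
        return u ^ np.roll(v, phase)

    # Different initial phases give different family members; their correlation
    # profiles (and hence multiuser interference) differ, which is the degree of
    # freedom the phase optimization exploits.
    c = [1 - 2 * gold_code(p) for p in (0, 3, 7)]          # map bits {0,1} -> chips {+1,-1}
    for i in range(len(c)):
        for j in range(i + 1, len(c)):
            xc = max(abs(int(np.dot(c[i], np.roll(c[j], s)))) for s in range(31))
            print(f"codes {i},{j}: max periodic cross-correlation = {xc}")
    ```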

  5. Optimizing the search for high-z GRBs:. the JANUS X-ray coded aperture telescope

    NASA Astrophysics Data System (ADS)

    Burrows, D. N.; Fox, D.; Palmer, D.; Romano, P.; Mangano, V.; La Parola, V.; Falcone, A. D.; Roming, P. W. A.

    We discuss the optimization of gamma-ray burst (GRB) detectors with a goal of maximizing the detected number of bright high-redshift GRBs, in the context of design studies conducted for the X-ray transient detector on the JANUS mission. We conclude that the optimal energy band for detection of high-z GRBs is below about 30 keV. We considered both lobster-eye and coded aperture designs operating in this energy band. Within the available mass and power constraints, we found that the coded aperture mask was preferred for the detection of high-z bursts with bright enough afterglows to probe galaxies in the era of the Cosmic Dawn. This initial conclusion was confirmed through detailed mission simulations that found that the selected design (an X-ray Coded Aperture Telescope) would detect four times as many bright, high-z GRBs as the lobster-eye design we considered. The JANUS XCAT instrument will detect 48 GRBs with z > 5 and fluence S_x > 3 × 10⁻⁷ erg cm⁻² in a two-year mission.

  6. The SWAN-SCALE code for the optimization of critical systems

    SciTech Connect

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-07-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.

  7. An optimal unequal error protection scheme with turbo product codes for wavelet compression of ultraspectral sounder data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.

    2006-08-01

    Most source coding techniques generate bitstream where different regions have unequal influences on data reconstruction. An uncorrected error in a more influential region can cause more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding with different code rates for different regions of bitstream may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding. We use JPEG2000 for source coding and turbo product code (TPC) for channel coding as an example to demonstrate this technique with ultraspectral sounder data. Wavelet compression yields unequal significance in different wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rates for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors when compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code rate allocation for UEP needs to be determined only once and can be done offline.

  8. Optimizing performance of superscalar codes for a single Cray X1MSP processor

    SciTech Connect

    Shan, Hongzhang; Strohmaier, Erich; Oliker, Leonid

    2004-06-08

    The growing gap between sustained and peak performance for full-scale complex scientific applications on conventional supercomputers is a major concern in high performance computing. The recently-released vector-based Cray X1 offers to bridge this gap for many demanding scientific applications. However, this unique architecture contains both data caches and multi-streaming processing units, and the optimal programming methodology is still under investigation. In this paper we investigate Cray X1 code optimization for a suite of computational kernels originally designed for superscalar processors. For our study, we select four applications from the SPLASH2 application suite (1-D FFT, Radix, Ocean, and Nbody), two kernels from the NAS benchmark suite (3-D FFT and CG), and a matrix-matrix multiplication kernel. Results show that for many cases, the addition of vectorization compiler directives results in faster runtimes. However, to achieve a significant performance improvement via increased vector length, it is often necessary to restructure the program at the source level, sometimes leading to algorithmic-level transformations. Additionally, memory bank conflicts may result in substantial performance losses. These conflicts can often be exacerbated when optimizing code for increased vector lengths, and must be explicitly minimized. Finally, we investigate the influence of the X1 data caches on overall performance.

  9. Operationally optimal vertex-based shape coding with arbitrary direction edge encoding structures

    NASA Astrophysics Data System (ADS)

    Lai, Zhongyuan; Zhu, Junhuan; Luo, Jiebo

    2014-07-01

    The intention of shape coding in the MPEG-4 is to improve the coding efficiency as well as to facilitate the object-oriented applications, such as shape-based object recognition and retrieval. These require both efficient shape compression and effective shape description. Although these two issues have been intensively investigated in data compression and pattern recognition fields separately, it remains an open problem when both objectives need to be considered together. To achieve high coding gain, the operational rate-distortion optimal framework can be applied, but the direction restriction of the traditional eight-direction edge encoding structure reduces its compression efficiency and description effectiveness. We present two arbitrary direction edge encoding structures to relax this direction restriction. They consist of a sector number, a short component, and a long component, which represent both the direction and the magnitude information of an encoding edge. Experiments on both shape coding and hand gesture recognition validate that our structures can reduce a large number of encoding vertices and save up to 48.9% bits. Besides, the object contours are effectively described and suitable for the object-oriented applications.

  10. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
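
    The underlying phase-retrieval step can be illustrated with a plain Gerchberg-Saxton loop; the paper's MSW-MP algorithm adds multiple signal windows, multiple planes, and polarization, none of which are reproduced here. The toy below searches for an input-plane phase mask whose far-field amplitude approximates a binary block standing in for a QR-code region; all sizes and names are illustrative.

    ```python
    import numpy as np

    def gerchberg_saxton(target_amplitude, iterations=200, seed=0):
        """Standard GS loop: find an input-plane phase mask whose Fourier transform
        approximates the target output-plane amplitude."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
        for _ in range(iterations):
            field_out = np.fft.fft2(np.exp(1j * phase))                       # propagate forward
            field_out = target_amplitude * np.exp(1j * np.angle(field_out))   # impose target amplitude
            field_in = np.fft.ifft2(field_out)                                # propagate back
            phase = np.angle(field_in)                                        # keep phase, unit amplitude
        return phase

    # toy binary "message" pattern standing in for a QR-code block
    target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0
    mask = gerchberg_saxton(target)
    recon = np.abs(np.fft.fft2(np.exp(1j * mask)))
    recon /= recon.max()
    print("reconstruction correlation:",
          round(float(np.corrcoef(recon.ravel(), target.ravel())[0, 1]), 3))
    ```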

  11. Performance and optimization of direct implicit time integration schemes for use in electrostatic particle simulation codes

    SciTech Connect

    Procassini, R.J.; Birdsall, C.K.; Morse, E.C.; Cohen, B.I.

    1988-01-01

    Implicit time integration schemes allow for the use of larger time steps than conventional explicit methods, thereby extending the applicability of kinetic particle simulation methods. This paper will describe a study of the performance and optimization of two such direct implicit schemes, which are used to follow the trajectories of charged particles in an electrostatic, particle-in-cell plasma simulation code. The direct implicit method that was used for this study is an alternative to the moment-equation implicit method. 10 refs., 7 figs., 4 tabs.

  12. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  13. An Integer-Coded Chaotic Particle Swarm Optimization for Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Yue, Chen; Yan-Duo, Zhang; Jing, Lu; Hui, Tian

    The Traveling Salesman Problem (TSP) is one of the NP-hard combinatorial optimization problems, which experience a “combination explosion” when the problem goes beyond a certain size. Therefore, the search for an effective solution method has been a hot topic. The general mathematical model of TSP is discussed, and its permutation-and-combination-based model is presented. Based on these, an Integer-coded Chaotic Particle Swarm Optimization (ICPSO) for solving TSP is proposed. Here, each particle is encoded with integers, a chaotic sequence is used to guide the global search, and each particle varies its position via “flying”. With a typical 20-city TSP as an instance, a simulation experiment comparing ICPSO with GA is carried out. Experimental results demonstrate that ICPSO is simple but effective, and outperforms GA.
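
    One plausible reading of the algorithm can be sketched as follows: tours are integer-coded permutations, a logistic-map sequence x_{k+1} = 4 x_k (1 - x_k) replaces the uniform random numbers of standard PSO, and a particle "flies" by applying a chaotically selected subset of the swaps that would transform it into its personal and global bests. This is an illustration of that interpretation only; the paper's exact operators and parameters are not reproduced, and all names and settings below are made up.

    ```python
    import numpy as np

    def tour_length(tour, dist):
        return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def swaps_toward(current, target):
        """Swap sequence that transforms `current` into `target` (velocity analogue)."""
        cur, pos, seq = list(current), {c: i for i, c in enumerate(current)}, []
        for i, city in enumerate(target):
            if cur[i] != city:
                j = pos[city]
                seq.append((i, j))
                pos[cur[i]], pos[city] = j, i
                cur[i], cur[j] = cur[j], cur[i]
        return seq

    def chaotic_pso_tsp(dist, n_particles=20, iters=300, seed=7):
        n = dist.shape[0]
        rng = np.random.default_rng(seed)
        x = rng.random()                                   # chaotic state (logistic map)
        def chaos():
            nonlocal x
            x = 4.0 * x * (1.0 - x)                        # x_{k+1} = 4 x_k (1 - x_k)
            return x
        swarm = [list(rng.permutation(n)) for _ in range(n_particles)]
        pbest = [p[:] for p in swarm]
        gbest = min(swarm, key=lambda t: tour_length(t, dist))[:]
        for _ in range(iters):
            for k, tour in enumerate(swarm):
                # apply a chaotically chosen fraction of the swaps toward pbest and gbest
                for target, keep in ((pbest[k], 0.5), (gbest, 0.5)):
                    for i, j in swaps_toward(tour, target):
                        if chaos() < keep:
                            tour[i], tour[j] = tour[j], tour[i]
                i, j = int(chaos() * n), int(chaos() * n)  # chaotic exploration swap
                tour[i % n], tour[j % n] = tour[j % n], tour[i % n]
                if tour_length(tour, dist) < tour_length(pbest[k], dist):
                    pbest[k] = tour[:]
            gbest = min(pbest + [gbest], key=lambda t: tour_length(t, dist))[:]
        return gbest, tour_length(gbest, dist)

    # random 20-city instance (coordinates are made up)
    rng = np.random.default_rng(0)
    pts = rng.random((20, 2))
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    best_tour, best_len = chaotic_pso_tsp(D)
    print("best length:", round(best_len, 3))
    ```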

  14. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups could be obtained.
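
    For reference, the classic sort-and-sweep (sweep-and-prune) broad phase that such variants build on can be stated in a few lines: sort bounding-box endpoints along one axis, sweep once, and test the surviving candidates on the remaining axes. The sketch below works on axis-aligned bounding boxes; the paper's novel variant for convex polyhedra and its OpenMP parallelization are not reproduced.

    ```python
    def sort_and_sweep(aabbs):
        """Classic sort-and-sweep broad phase: sort interval endpoints on the x axis,
        sweep once, and report pairs whose x-intervals overlap and whose boxes also
        overlap on y and z.  aabbs[i] = (lo, hi) with lo/hi = (x, y, z)."""
        events = []
        for i, (lo, hi) in enumerate(aabbs):
            events.append((lo[0], 0, i))          # 0 = interval opens
            events.append((hi[0], 1, i))          # 1 = interval closes
        events.sort()
        active, pairs = set(), []
        for _, kind, i in events:
            if kind == 0:
                lo_i, hi_i = aabbs[i]
                for j in active:                  # candidates already overlapping on x
                    lo_j, hi_j = aabbs[j]
                    if all(lo_i[k] <= hi_j[k] and lo_j[k] <= hi_i[k] for k in (1, 2)):
                        pairs.append(tuple(sorted((i, j))))
                active.add(i)
            else:
                active.discard(i)
        return pairs

    boxes = [((0, 0, 0), (2, 2, 2)), ((1, 1, 1), (3, 3, 3)), ((5, 5, 5), (6, 6, 6))]
    print(sort_and_sweep(boxes))                  # -> [(0, 1)]
    ```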

  15. Optimized conical shaped charge design using the SCAP (Shaped Charge Analysis Program) code

    SciTech Connect

    Vigil, M.G.

    1988-09-01

    The Shaped Charge Analysis Program (SCAP) is used to analytically model and optimize the design of Conical Shaped Charges (CSC). A variety of existing CSCs are initially modeled with the SCAP code and the predicted jet tip velocities, jet penetrations, and optimum standoffs are compared to previously published experimental results. The CSCs vary in size from 0.69 inch (1.75 cm) to 9.125 inch (23.18 cm) conical liner inside diameter. Two liner materials (copper and steel) and several explosives (Octol, Comp B, PBX-9501) are included in the CSCs modeled. The target material was mild steel. A parametric study was conducted using the SCAP code to obtain the optimum design for a 3.86 inch (9.8 cm) CSC. The variables optimized in this study included the CSC apex angle, conical liner thickness, explosive height, optimum standoff, tamper/confinement thickness, and explosive width. The non-dimensionalized jet penetration to diameter ratio versus the above parameters are graphically presented. 12 refs., 10 figs., 7 tabs.

  16. Optimization of wavefront-coded infinity-corrected microscope systems with extended depth of field

    PubMed Central

    Zhao, Tingyu; Mauger, Thomas; Li, Guoqiang

    2013-01-01

    The depth of field of an infinity-corrected microscope system is greatly extended by simply applying a specially designed phase mask between the objective and the tube lens. In comparison with the method of modifying the structure of objective, it is more cost effective and provides improved flexibility for assembling the system. Instead of using an ideal optical system for simulation which was the focus of the previous research, a practical wavefront-coded infinity-corrected microscope system is designed in this paper by considering the various aberrations. Two new optimization methods, based on the commercial optical design software, are proposed to design a wavefront-coded microscope using a non-symmetric phase mask and a symmetric phase mask, respectively. We use polynomial phase mask and rational phase mask as examples of the non-symmetric and symmetric phase masks respectively. Simulation results show that both optimization methods work well for a 32 × infinity-corrected microscope system with 0.6 numerical aperture. The depth of field is extended to about 13 times of the traditional one. PMID:24010008

  17. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.

    2015-12-01

    The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported in order to bring this package to the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing as much as possible the elapsed execution time of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency results are shown.

  18. Motion estimation optimization tools for the emerging high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Masri, Wassim; Noaman, Bassam

    2014-02-01

    Recent developments in hardware and software have allowed a new generation of video quality. However, development in networking and digital communication is lagging behind. This prompted the establishment of the Joint Collaborative Team on Video Coding (JCT-VC), with an objective to develop a new high-performance video coding standard. A primary reason for developing the HEVC was to enable efficient processing and transmission of HD videos that normally contain large smooth areas; therefore, the HEVC utilizes larger encoding blocks than the previous standard to enable more effective encoding, while smaller blocks are still exploited to encode fast/complex areas of video more efficiently. Hence, the implementation of the encoder investigates all the possible block sizes. This and many added features in the new standard have led to a significant increase in the complexity of the encoding process. Furthermore, there is no automated process to decide when large blocks or small blocks should be exploited. To overcome this problem, this research proposes a set of optimization tools to reduce the encoding complexity while maintaining the same quality and compression rate. The method automates this process through a set of hierarchical steps while still using the standard's refined coding tools.

  19. Optimal analysis of ultra broadband energy-time entanglement for high bit-rate dense wavelength division multiplexed quantum networks

    NASA Astrophysics Data System (ADS)

    Kaiser, F.; Aktas, D.; Fedrici, B.; Lunghi, T.; Labonté, L.; Tanzilli, S.

    2016-06-01

    We demonstrate an experimental method for measuring energy-time entanglement over almost 80 nm spectral bandwidth in a single shot with a quantum bit error rate below 0.5%. Our scheme is extremely cost-effective and efficient in terms of resources as it employs only one source of entangled photons and one fixed unbalanced interferometer per phase-coded analysis basis. We show that the maximum analysis spectral bandwidth is obtained when the analysis interferometers are properly unbalanced, a strategy which can be straightforwardly applied to most of today's experiments based on energy-time and time-bin entanglement. Our scheme has therefore a great potential for boosting bit rates and reducing the resource overhead of future entanglement-based quantum key distribution systems.

  20. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    NASA Astrophysics Data System (ADS)

    Gather, Malte C.; Yun, Seok Hyun

    2014-12-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (-7 dB) and support strong optical amplification (gnet = 22 cm-1; 96 dB cm-1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  1. Detection optimization using linear systems analysis of a coded aperture laser sensor system

    SciTech Connect

    Gentry, S.M.

    1994-09-01

    Minimum detectable irradiance levels for a diffraction grating based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth's surface caused pseudo-imaging effects on the sensor's detector arrays that resulted in the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction grating technique, a classical Young's double-slit aperture technique was investigated as a possible optimized solution but was not shown to produce a system with better clutter-noise limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double-slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While a concept was not found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of the application of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for the analysis of a wide range of optoelectronic systems where the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.

  2. Optimization and implementation of the integer wavelet transform for image coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella

    2002-01-01

    This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite-precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity. PMID:18244658
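    A minimal sketch of the lifting idea behind such IWT implementations, using the reversible LeGall 5/3 integer filter (a factorization commonly used for lossless coding); the specific factorizations selected in the paper and its finite-precision analysis are not reproduced here, and the boundary handling below is simple edge replication rather than the symmetric extension used in practice:

        import numpy as np

        def legall53_forward(x):
            # One level of the reversible LeGall 5/3 integer wavelet transform via lifting.
            # Assumes an even-length integer signal; boundaries use edge replication.
            x = np.asarray(x, dtype=np.int64)
            s = x[0::2].copy()                      # even samples -> approximation band
            d = x[1::2].copy()                      # odd samples  -> detail band
            s_right = np.append(s[1:], s[-1])       # s[n+1] with edge replication
            d -= (s + s_right) >> 1                 # predict: d[n] -= floor((s[n]+s[n+1])/2)
            d_left = np.insert(d[:-1], 0, d[0])     # d[n-1] with edge replication
            s += (d_left + d + 2) >> 2              # update:  s[n] += floor((d[n-1]+d[n]+2)/4)
            return s, d

        def legall53_inverse(s, d):
            # Exact inverse: undo the lifting steps in reverse order.
            s, d = s.copy(), d.copy()
            d_left = np.insert(d[:-1], 0, d[0])
            s -= (d_left + d + 2) >> 2
            s_right = np.append(s[1:], s[-1])
            d += (s + s_right) >> 1
            x = np.empty(s.size + d.size, dtype=np.int64)
            x[0::2], x[1::2] = s, d
            return x

        x = np.array([12, 15, 14, 10, 9, 11, 13, 16])
        s, d = legall53_forward(x)
        assert np.array_equal(legall53_inverse(s, d), x)    # lossless round trip

    Because each lifting step only adds (or subtracts) an integer computed from other samples, the inverse subtracts exactly the same integers in reverse order, which is what keeps the transform reversible regardless of how coarsely the lifting coefficients are quantized.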

  3. Real Time Optimizing Code for Stabilization and Control of Plasma Reactors

    Energy Science and Technology Software Center (ESTSC)

    1995-09-25

    LOOP4 is a flexible real-time control code that acquires signals (input variables) from an array of sensors, computes therefrom the actual state of the reactor system, compares the actual state to the desired state (a goal), and commands changes to reactor controls (output, or manipulated variables) in order to minimize the difference between the actual state of the reactor and the desired state. The difference between actual and desired states is quantified in terms of a distance metric in the space defined by the sensor measurements. The desired state of the reactor is specified in terms of target values of sensor readings that were obtained previously during development and optimization of a process by a process engineer using conventional techniques.

  4. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and magnetic field of the Earth's outer core are expected to have vast length scales. To resolve these flows, high performance computing is required for geodynamo simulations using the spherical harmonic transform (SHT); a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters capable of computing at the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To further optimize, we investigate three different algorithms for the SHT using GPUs. One is to preemptively compute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU simultaneously. In the third approach, we initially partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU. Thereafter, the partitioned works are simultaneously computed in the time integration loop. We examine the trade-offs between space and time, memory bandwidth and GPU computations on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU enabled Legendre transform. Furthermore, we will compare and contrast the different algorithms in the context of GPUs.

  5. Analytical computation of the derivative of PSF for the optimization of phase mask in wavefront coding system.

    PubMed

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-09-01

    A wavefront coding system can realize defocus invariance of the PSF/OTF with a phase mask inserted in the pupil plane. Ideally, the derivative of the PSF/OTF with respect to defocus error should be as close to zero as possible over the extended depth of field/focus for the wavefront coding system. In this paper, we propose an analytical expression for the computation of the derivative of the PSF. With this expression, a merit function based on the derivative of the PSF can be used in the optimization of a wavefront coding system with any type of phase mask and aberrations. Computation of the derivative of the PSF using the proposed expression and using the FFT, respectively, are compared and discussed. We also demonstrate the optimization of a generic polynomial phase mask in a wavefront coding system as an example. PMID:27607710
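    A hedged sketch, in standard Fourier-optics notation, of the kind of analytical derivative involved, written for a one-dimensional generalized pupil P(x) e^{j phi(x)} with defocus parameter psi; this is the generic differentiate-under-the-integral form, not necessarily the exact expression proposed in the paper:

        A(u;\psi) = \int P(x)\, e^{j\phi(x)}\, e^{j\psi x^{2}}\, e^{-j 2\pi u x}\, dx, \qquad h(u;\psi) = \lvert A(u;\psi) \rvert^{2}

        \frac{\partial A}{\partial \psi}(u;\psi) = \int j x^{2}\, P(x)\, e^{j\phi(x)}\, e^{j\psi x^{2}}\, e^{-j 2\pi u x}\, dx

        \frac{\partial h}{\partial \psi}(u;\psi) = 2\, \mathrm{Re}\!\left\{ A^{*}(u;\psi)\, \frac{\partial A}{\partial \psi}(u;\psi) \right\}

    Written this way, the derivative of the PSF at a given defocus costs one additional Fourier transform (of x² times the generalized pupil) instead of finite differencing of the PSF over psi, which is the kind of saving a derivative-based merit function can exploit.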

  6. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
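    A minimal Python sketch of this kind of steepest-descent iteration for a single principal direction, assuming two hypothetical layers, an exponential dose-thickness relation and a simple quadratic penalty for the dose constraint; the numerical values are illustrative and the penalty formulation is a simplification of what DOPEX actually does:

        import numpy as np

        # Hypothetical two-layer shield in one principal direction (illustrative values).
        rho  = np.array([19.3, 0.94])    # layer densities, g/cm^3
        mu   = np.array([1.20, 0.35])    # effective attenuation coefficients, 1/cm
        area = np.array([1.0, 1.0])      # layer areas, cm^2
        D0, D_limit = 1.0e4, 1.0         # unshielded dose rate and dose constraint

        weight = lambda t: float(np.sum(rho * area * t))
        dose   = lambda t: D0 * np.exp(-float(np.sum(mu * t)))  # exponential dose-thickness relation

        def steepest_descent(t, step=0.1, penalty=100.0, iters=20000):
            # Minimize shield weight plus a quadratic penalty on dose-constraint violation.
            for k in range(iters):
                viol = max(dose(t) - D_limit, 0.0)
                # gradient of weight + penalty*viol**2; note d(dose)/dt_i = -mu_i * dose(t)
                grad = rho * area - 2.0 * penalty * viol * mu * dose(t)
                t = np.maximum(t - step / (1.0 + 1e-3 * k) * grad / np.linalg.norm(grad), 0.0)
            return t

        t = steepest_descent(np.array([10.0, 10.0]))   # initial (input) thicknesses, cm
        print(t, weight(t), dose(t))

    A multi-direction, multi-constraint version simply adds one such dose term per principal direction, matching the report's assumption that the dose in each direction depends only on the thicknesses in that direction.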

  7. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: Equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb

    SciTech Connect

    Piron, R.; Blenski, T.

    2011-02-15

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included.

  8. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb.

    PubMed

    Piron, R; Blenski, T

    2011-02-01

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included. PMID:21405914

  9. Symmetry-based coding method and synthesis topology optimization design of ultra-wideband polarization conversion metasurfaces

    NASA Astrophysics Data System (ADS)

    Sui, Sai; Ma, Hua; Wang, Jiafu; Feng, Mingde; Pang, Yongqiang; Xia, Song; Xu, Zhuo; Qu, Shaobo

    2016-07-01

    In this letter, we propose the synthesis topology optimization method of designing ultra-wideband polarization conversion metasurface for linearly polarized waves. The general design principle of polarization conversion metasurfaces is derived theoretically. Symmetry-based coding, with shorter coding length and better optimization efficiency, is then proposed. As an example, a topological metasurface is demonstrated with an ultra-wideband polarization conversion property. The results of both simulations and experiments show that the metasurface can convert linearly polarized waves into cross-polarized waves in 8.0-30.0 GHz, obtaining the property of ultra-wideband polarization conversion based on metasurfaces, and hence validating the synthesis design method. The proposed method combines the merits of topology optimization and symmetry-based coding method, which provides an efficient tool for the design of high-performance polarization conversion metasurfaces.

  10. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai

    2010-12-01

    We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies from human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, by using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented to allocate more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of perceptually insensitive image quality degradation of the background image.

  11. An Optimal Pull-Push Scheduling Algorithm Based on Network Coding for Mesh Peer-to-Peer Live Streaming

    NASA Astrophysics Data System (ADS)

    Cui, Laizhong; Jiang, Yong; Wu, Jianping; Xia, Shutao

    Most large-scale Peer-to-Peer (P2P) live streaming systems are constructed as a mesh structure, which can provide robustness in the dynamic P2P environment. The pull scheduling algorithm widely used in this mesh structure degrades the performance of the entire system. Recently, network coding was introduced in mesh P2P streaming systems to improve the performance, which makes the push strategy feasible. One of the most famous scheduling algorithms based on network coding is R2, with a random push strategy. Although R2 has achieved some success, the push scheduling strategy still lacks a theoretical model and optimal solution. In this paper, we propose a novel optimal pull-push scheduling algorithm based on network coding, which consists of two stages: the initial pull stage and the push stage. The main contributions of this paper are: 1) we put forward a theoretical analysis model that considers the scarcity and timeliness of segments; 2) we formulate the push scheduling problem as a global optimization problem and decompose it into local optimization problems on individual peers; 3) we introduce some rules to transform the local optimization problem into a classical min-cost optimization problem in order to solve it; 4) we combine the pull strategy with the push strategy and systematically realize our scheduling algorithm. Simulation results demonstrate that the decode delay, decode ratio and redundant fraction of a P2P streaming system with our algorithm can be significantly improved, without losing throughput or increasing overhead.

  12. Neural network river forecasting through baseflow separation and binary-coded swarm optimization

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie

    2015-10-01

    The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are preferred ways to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit separately the baseflow and excess flow components as produced by a digital filter, and reconstruct the total flow by adding these two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANNs, are used to drastically reduce the large computational times required to perform the experiments. The results show no evidence that MMs outperform GMs for predicting the total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters maximizing overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
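    A minimal sketch of the kind of digital-filter baseflow separation these MMs rely on, using the standard one-parameter recursive filter (Lyne-Hollick form); whether this is the exact filter used in the paper is an assumption, and the toy hydrograph is purely illustrative:

        import numpy as np

        def baseflow_separation(q, alpha=0.925):
            # One-parameter recursive digital filter splitting total streamflow q into
            # quickflow and baseflow; alpha is the filter parameter that the paper's
            # binary-coded swarm optimization would search over (0.925 is a common default).
            q = np.asarray(q, dtype=float)
            quick = np.zeros_like(q)
            for k in range(1, q.size):
                f = alpha * quick[k - 1] + 0.5 * (1.0 + alpha) * (q[k] - q[k - 1])
                quick[k] = min(max(f, 0.0), q[k])      # keep quickflow physically bounded
            return q - quick, quick                    # (baseflow, quickflow)

        # Toy hydrograph: a storm peak on top of a slowly receding baseflow.
        q = np.array([5, 5, 6, 20, 45, 30, 18, 11, 8, 6, 5, 5], dtype=float)
        base, quick = baseflow_separation(q)

    In a modular model, one learner is fitted to the base series and another to the quick series, and the total-flow estimate is their sum; because the swarm optimizer searches over alpha jointly with the model structure, the fitted alpha can drift away from hydrologically meaningful values, which is the behavior the paper reports.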

  13. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    SciTech Connect

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the

  14. Code to Optimize Load Sharing of Split-Torque Transmissions Applied to the Comanche Helicopter

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Most helicopters now in service have a transmission with a planetary design. Studies have shown that some helicopters would be lighter and more reliable if they had a transmission with a split-torque design instead. However, a split-torque design has never been used by a U.S. helicopter manufacturer because there has been no proven method to ensure equal sharing of the load among the multiple load paths. The Sikorsky/Boeing team has chosen to use a split-torque transmission for the U.S. Army's Comanche helicopter, and Sikorsky Aircraft is designing and manufacturing the transmission. To help reduce the technical risk of fielding this helicopter, NASA and the Army have done the research jointly in cooperation with Sikorsky Aircraft. A theory was developed that equal load sharing could be achieved by proper configuration of the geartrain, and a computer code was completed in-house at the NASA Lewis Research Center to calculate this optimal configuration.

  15. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range and compared to the FLUKA code and to experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show a good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated range of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the results for PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  16. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)–the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron’s maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  17. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    PubMed

    Harper, Nicol S; Scott, Brian H; Semple, Malcolm N; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)-the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  18. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  19. Experiences in the Performance Analysis and Optimization of a Deterministic Radiation Transport Code on the Cray SV1

    SciTech Connect

    Peter Cebull

    2004-05-01

    The Attila radiation transport code, which solves the Boltzmann neutron transport equation on three-dimensional unstructured tetrahedral meshes, was ported to a Cray SV1. Cray's performance analysis tools pointed to two subroutines that together accounted for 80%-90% of the total CPU time. Source code modifications were performed to enable vectorization of the most significant loops, to correct unfavorable strides through memory, and to replace a conjugate gradient solver subroutine with a call to the Cray Scientific Library. These optimizations resulted in a speedup of 7.79 for the INEEL's largest ATR model. Parallel scalability of the OpenMP version of the code is also discussed, and timing results are given for other non-vector platforms.

  20. Code-Switching and the Optimal Grammar of Bilingual Language Use

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.; Bolonyai, Agnes

    2011-01-01

    In this article, we provide a framework of bilingual grammar that offers a theoretical understanding of the socio-cognitive bases of code-switching in terms of five general principles that, individually or through interaction with each other, explain how and why specific instances of code-switching arise. We provide cross-linguistic empirical…

  1. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes.

    PubMed

    Khajeh, Masoud; Safigholi, Habib

    2016-03-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike the radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. Second optimization was done on the selection of the anode shape based on the Monte Carlo in water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes including cylindrical, spherical, and conical were considered. Moreover, by Computational Fluid Dynamic (CFD) code the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  2. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike the radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. Second optimization was done on the selection of the anode shape based on the Monte Carlo in water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes including cylindrical, spherical, and conical were considered. Moreover, by Computational Fluid Dynamic (CFD) code the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  3. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  4. Optimization of WDM lightwave systems (BAC) design using error control coding

    NASA Astrophysics Data System (ADS)

    Mruthyunjaya, H. S.; Umesh, G.; Sathish Kumar, M.

    2007-04-01

    In a binary asymmetric channel (BAC) it may be necessary to correct only those errors which result from incorrect transmission of one of the two code elements. In optical fiber multichannel systems, optical amplifiers are critical components, and amplified spontaneous emission noise in these amplifiers is the major noise source. The properties of erbium-doped fiber amplifiers are nearly ideal for application in long-haul lightwave transmission. We investigate the performance of error-correcting codes in such systems in the presence of stimulated Raman scattering and amplified spontaneous emission noise with asymmetric channel statistics. The performance of some of the best known concatenated coding schemes is reported.

  5. GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders

    SciTech Connect

    Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed

    2013-06-01

    We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on a matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.

  6. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precise particle therapy, especially for medium containing inhomogeneities. However, the inherent choice of computational parameters in MC simulation codes of GATE, PHITS and FLUKA that is observed for uniform scanning proton beam needs to be evaluated. This means that the relationship between the effect of input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique. We

  7. [Non elective cesarean section: use of a color code to optimize management of obstetric emergencies].

    PubMed

    Rudigoz, René-Charles; Huissoud, Cyril; Delecour, Lisa; Thevenet, Simone; Dupont, Corinne

    2014-06-01

    The medical team of the Croix Rousse teaching hospital maternity unit has developed, over the last ten years, a set of procedures designed to respond to various emergency situations necessitating Caesarean section. Using the Lucas classification, we have defined as precisely as possible the degree of urgency of Caesarean sections. We have established specific protocols for the implementation of urgent and very urgent Caesarean sections and have chosen a simple means to convey the degree of urgency to all team members, namely a color code system (red, orange and green). We have set time goals from decision to delivery: 15 minutes for the red code and 30 minutes for the orange code. The results seem very positive: the frequency of urgent and very urgent Caesareans has fallen over time, from 6.1% to 1.6% in 2013. The average time from decision to delivery is 11 minutes for code red Caesareans and 21 minutes for code orange Caesareans. These time goals are now achieved in 95% of cases. Organizational and anesthetic difficulties are the main causes of delays. The indications for red and orange code Caesareans are appropriate more than two times out of three. Perinatal outcomes are generally favorable, code red Caesareans being life-saving in 15% of cases. No increase in maternal complications has been observed. In sum, each obstetric department should have its own protocols for handling urgent and very urgent Caesarean sections. Continuous monitoring of their implementation, relevance and results should be conducted. Management of extreme urgency must be integrated into the management of patients with identified risks (scarred uterus and twin pregnancies, for example), and also in structures without medical facilities (birthing centers). Obstetric teams must keep in mind that implementation of these protocols in no way dispenses with close monitoring of labour. PMID:26983190

  8. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.

  9. DENSE MEDIUM CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell; Chris J. Barbee; Peter J. Bethell; Chris J. Wood

    2005-06-30

    Dense medium cyclones (DMCs) are known to be efficient, high-tonnage devices suitable for upgrading particles in the 50 to 0.5 mm size range. This versatile separator, which uses centrifugal forces to enhance the separation of fine particles that cannot be upgraded in static dense medium separators, can be found in most modern coal plants and in a variety of mineral plants treating iron ore, dolomite, diamonds, potash and lead-zinc ores. Due to the high tonnage, a small increase in DMC efficiency can have a large impact on plant profitability. Unfortunately, the knowledge base required to properly design and operate DMCs has been seriously eroded during the past several decades. In an attempt to correct this problem, a set of engineering tools have been developed to allow producers to improve the efficiency of their DMC circuits. These tools include (1) low-cost density tracers that can be used by plant operators to rapidly assess DMC performance, (2) mathematical process models that can be used to predict the influence of changes in operating and design variables on DMC performance, and (3) an expert advisor system that provides plant operators with a user-friendly interface for evaluating, optimizing and trouble-shooting DMC circuits. The field data required to develop these tools was collected by conducting detailed sampling and evaluation programs at several industrial plant sites. These data were used to demonstrate the technical, economic and environmental benefits that can be realized through the application of these engineering tools.

  10. A study of the optimization method used in the NAVY/NASA gas turbine engine computer code

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Pines, S.

    1977-01-01

    Sources of numerical noise affecting the convergence properties of the Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.

  11. Program user's manual for optimizing the design of a liquid or gaseous propellant rocket engine with the automated combustor design code AUTOCOM

    NASA Technical Reports Server (NTRS)

    Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.

    1973-01-01

    This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.

  12. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
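    A GF(2) sketch of the filter-and-select loop described here (distance d = 4 corresponds to SECDED: any three columns must be linearly independent); the function names and parameters are illustrative, not the patented implementation:

        import itertools
        import numpy as np

        def populate_check_matrix(r, n, d=4):
            # Greedy column population over GF(2): repeatedly filter the candidate set down
            # to vectors whose addition keeps every subset of d-1 columns independent,
            # then select one, until n columns are chosen or the candidates run out.
            candidates = [np.array(v, dtype=np.uint8)
                          for v in itertools.product([0, 1], repeat=r)]
            candidates = [v for v in candidates if v.any()]        # drop the all-zero vector
            chosen = []

            def keeps_independence(v):
                # no subset of up to d-2 already-chosen columns may XOR with v to zero
                for k in range(1, d - 1):
                    for subset in itertools.combinations(chosen, k):
                        if not np.bitwise_xor.reduce(subset + (v,)).any():
                            return False
                return True

            while len(chosen) < n:
                candidates = [v for v in candidates if keeps_independence(v)]   # filter step
                if not candidates:
                    break                                   # unpopulated columns remain
                chosen.append(candidates.pop(0))            # select step
            return np.array(chosen).T if chosen else None

        H = populate_check_matrix(r=8, n=24)   # e.g. 8 check bits over a 24-column block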

  13. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  14. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  15. Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Askri, Boubaker

    2015-10-01

    Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low energy bremsstrahlung photons with beryllium material. A benchmark test showed that a good agreement was achieved when comparing the emitted neutron flux spectra predicted by Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two stage Monte Carlo simulation. In the first stage, the distributions of the seven phase space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 1010 neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 109 neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.

  16. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  17. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
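    A toy sketch of such an ILP in Python using the PuLP modeling library: choose at most one MCS per SVC layer under a shared airtime budget so as to maximize the expected quality over the multicast group. The numbers are assumptions for illustration, and the formulation is deliberately simplified (it omits, for example, the layer-dependency constraints a full SVC model would include):

        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        layers, mcs = range(3), range(4)
        bits     = [300, 200, 200]        # bits per layer per scheduling window (illustrative)
        rate     = [100, 200, 400, 600]   # bits deliverable per unit airtime for each MCS
        coverage = [1.0, 0.8, 0.5, 0.3]   # fraction of users whose channel supports each MCS
        utility  = [10, 4, 2]             # quality gain of receiving each layer
        T = 3.0                           # airtime budget

        x = {(l, m): LpVariable(f"x_{l}_{m}", cat=LpBinary) for l in layers for m in mcs}

        prob = LpProblem("svc_mcs_assignment", LpMaximize)
        # objective: expected quality = sum over layers of (users covered) * (layer utility)
        prob += lpSum(coverage[m] * utility[l] * x[l, m] for l in layers for m in mcs)
        for l in layers:                              # each layer gets at most one MCS
            prob += lpSum(x[l, m] for m in mcs) <= 1
        # total airtime of all transmitted layers must fit in the budget
        prob += lpSum((bits[l] / rate[m]) * x[l, m] for l in layers for m in mcs) <= T

        prob.solve()
        assignment = {l: m for (l, m), v in x.items() if v.value() == 1}
        print(assignment)

    Solving this with PuLP's default backend picks, for each layer, the MCS that best trades robustness (coverage) against airtime, which is the core of the resource-allocation question the paper formulates in much greater detail.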

  18. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  19. Performance of an Optimized Eta Model Code on the Cray T3E and a Network of PCs

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Rancic, Miodrag; Geiger, Jim

    2000-01-01

    In the year 2001, NASA will launch the satellite TRIANA that will be the first Earth observing mission to provide a continuous, full disk view of the sunlit Earth. As a part of the HPCC Program at NASA GSFC, we have started a project whose objectives are to develop and implement a 3D cloud data assimilation system, by combining TRIANA measurements with model simulation, and to produce accurate statistics of global cloud coverage as an important element of the Earth's climate. For simulation of the atmosphere within this project we are using the NCEP/NOAA operational Eta model. In order to compare TRIANA and the Eta model data on approximately the same grid without significant downscaling, the Eta model will be integrated at a resolution of about 15 km. The integration domain (from -70 to +70 deg in latitude and 150 deg in longitude) will cover most of the sunlit Earth disc and will continuously rotate around the globe following TRIANA. The cloud data assimilation is supposed to run and produce 3D clouds on a near real-time basis. Such a numerical setup and integration design is very ambitious and computationally demanding. Thus, though the Eta model code has been very carefully developed and its computational efficiency has been systematically polished during the years of operational implementation at NCEP, the current MPI version may still have problems with memory and efficiency for the TRIANA simulations. Within this work, we optimize a parallel version of the Eta model code on a Cray T3E and a network of PCs (theHIVE) in order to improve its overall efficiency. Our optimization procedure consists of introducing dynamically allocated arrays to reduce the size of static memory, and optimizing on a single processor by splitting loops to limit the number of streams. All the presented results are derived using an integration domain centered at the equator, with a size of 60 x 60 deg, and with horizontal resolutions of 1/2 and 1/3 deg, respectively. In accompanying

  20. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular and an FIR filter is used for digital pulse compression (DPC) to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, using either a single-stage mismatched filter or a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGAs, which often becomes a design challenge for system-on-chip (SoC) requirements. This multiplier requirement can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB deterioration in PSR. Using the cluster centroid as the tap weight greatly reduces the logic used in the FPGA for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between iterations and produce different clusters. As a result, the clustering of weights varies, and sometimes a smaller number of multipliers and a shorter filter can even provide a better PSR.
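
    The clustering step can be illustrated with a plain-numpy k-means over the tap weights, replacing each weight by its cluster centroid and then re-checking the peak-to-sidelobe ratio. The example below uses a Barker-13 code with a hypothetical Hamming-tapered filter as stand-in taps; it is not the authors' LP-designed mismatched filter.

      import numpy as np

      def kmeans_1d(w, k, iters=50, rng=None):
          """Plain k-means on scalar tap weights; returns centroids and labels."""
          rng = np.random.default_rng(0) if rng is None else rng
          centroids = rng.choice(w, size=k, replace=False)
          for _ in range(iters):
              labels = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centroids[j] = w[labels == j].mean()
          return centroids, labels

      def psr_db(code, taps):
          """Peak-to-sidelobe ratio (dB) of the pulse-compressed output."""
          out = np.convolve(code, taps)
          peak = np.max(np.abs(out))
          sidelobes = np.abs(out)[np.abs(out) < peak]   # everything except the main peak
          return 20 * np.log10(peak / sidelobes.max())

      # 13-bit Barker code; hypothetical real-valued taps (matched filter with a
      # Hamming taper) stand in for an LP-designed mismatched filter.
      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
      taps = barker13[::-1] * np.hamming(13)
      centroids, labels = kmeans_1d(taps, k=4)
      clustered_taps = centroids[labels]                # each weight -> its cluster centroid
      print(psr_db(barker13, taps), psr_db(barker13, clustered_taps))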

  1. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    SciTech Connect

    Yeaw, C.T.

    1995-12-31

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.

  2. Combined optimal quantization and lossless coding of digital holograms of three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2006-10-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated due to the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realized, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantization. We adapt the optimal signal quantization technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry. We complete the compression procedure by applying lossless techniques to the quantized holographic pixels.
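
    The histogram-driven quantization step lends itself to a compact sketch: a Lloyd-Max scalar quantizer designed from the histogram of, say, the real parts of the hologram values. The code below is a generic illustration with a synthetic histogram, assuming numpy; the actual hologram statistics and the subsequent lossless-coding stage are not reproduced.

      import numpy as np

      def lloyd_max_from_histogram(bin_centers, counts, k, iters=100):
          """Lloyd-Max scalar quantizer designed from a histogram (e.g. of the real or
          imaginary parts of hologram pixels): alternately set decision thresholds to
          midpoints between levels and levels to the centroid of the enclosed mass."""
          levels = np.linspace(bin_centers.min(), bin_centers.max(), k)
          for _ in range(iters):
              thresholds = 0.5 * (levels[:-1] + levels[1:])
              idx = np.digitize(bin_centers, thresholds)       # assign bins to levels
              for j in range(k):
                  mass = counts[idx == j]
                  if mass.sum() > 0:
                      levels[j] = np.average(bin_centers[idx == j], weights=mass)
          return levels

      # Hypothetical histogram of the real part of hologram values (roughly Laplacian).
      centers = np.linspace(-1, 1, 401)
      hist = np.exp(-np.abs(centers) / 0.2)
      levels = lloyd_max_from_histogram(centers, hist, k=4)
      print(np.round(levels, 3))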

  3. Method for dense packing discovery

    NASA Astrophysics Data System (ADS)

    Kallus, Yoav; Elser, Veit; Gravel, Simon

    2010-11-01

    The problem of packing a system of particles as densely as possible is foundational in the field of discrete geometry and is a powerful model in the material and biological sciences. As packing problems retreat from the reach of solution by analytic constructions, the importance of an efficient numerical method for conducting de novo (from-scratch) searches for dense packings becomes crucial. In this paper, we use the divide and concur framework to develop a general search method for the solution of periodic constraint problems, and we apply it to the discovery of dense periodic packings. An important feature of the method is the integration of the unit-cell parameters with the other packing variables in the definition of the configuration space. The method we present led to previously reported improvements in the densest-known tetrahedron packing. Here, we use the method to reproduce the densest-known lattice sphere packings and the best-known lattice kissing arrangements in up to 14 and 11 dimensions, respectively, providing numerical evidence for their optimality. For nonspherical particles, we report a dense packing of regular four-dimensional simplices with density ϕ=128/219≈0.5845 and with a similar structure to the densest-known tetrahedron packing.

  4. Optimized quadtree for Karhunen-Loeve transform in multispectral image coding.

    PubMed

    Lee, J

    1999-01-01

    A new multispectral image compression technique based on the Karhunen-Loeve transform (KLT) and the discrete cosine transform (DCT) is proposed. The quadtree for determining the transform block size and the quantizer for encoding the transform coefficients are jointly optimized in a rate-distortion sense. The problem is solved by a Lagrange multiplier approach. After a quadtree is determined by this approach, a one-dimensional (1-D) KLT is applied to the spectral axis for each block before the DCT is applied on the spatial domain. The eigenvectors of the autocovariance matrix, the quantization scale, and the quantized transform coefficients for each block are the output of the encoder. The overhead information required in this scheme is the bits for the quadtree, KLT, and quantizer representation. PMID:18262890
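
    For one block of the quadtree, the transform chain described (a 1-D KLT along the spectral axis followed by a 2-D DCT on each decorrelated band) can be sketched with numpy and scipy as below. Block size, band count, and data are placeholders, and the rate-distortion quadtree optimization and quantizer are omitted.

      import numpy as np
      from scipy.fft import dctn

      def klt_then_dct(block):
          """block: (bands, h, w) multispectral block.
          1-D KLT across the spectral axis, then 2-D DCT on each decorrelated band."""
          bands, h, w = block.shape
          x = block.reshape(bands, -1)                 # each column is one pixel's spectrum
          x = x - x.mean(axis=1, keepdims=True)
          cov = np.cov(x)                              # (bands, bands) autocovariance
          eigvals, eigvecs = np.linalg.eigh(cov)
          order = np.argsort(eigvals)[::-1]            # strongest components first
          kl = eigvecs[:, order].T @ x                 # spectral KLT coefficients
          kl = kl.reshape(bands, h, w)
          coeffs = np.stack([dctn(kl[b], norm="ortho") for b in range(bands)])
          return coeffs, eigvecs[:, order]

      block = np.random.rand(6, 16, 16)                # hypothetical 6-band, 16x16 block
      coeffs, basis = klt_then_dct(block)
      print(coeffs.shape, basis.shape)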

  5. ROCOPT: A user friendly interactive code to optimize rocket structural components

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1989-01-01

    ROCOPT is a user-friendly, graphically-interfaced, microcomputer-based computer program (IBM compatible) that optimizes rocket components by minimizing the structural weight. The rocket components considered are ring stiffened truncated cones and cylinders. The applied loading is static, and can consist of any combination of internal or external pressure, axial force, bending moment, and torque. Stress margins are calculated by means of simple closed form strength of material type equations. Stability margins are determined by approximate, orthotropic-shell, closed-form equations. A modified form of Powell's method, in conjunction with a modified form of the external penalty method, is used to determine the minimum weight of the structure subject to stress and stability margin constraints, as well as user input constraints on the structural dimensions. The graphical interface guides the user through the required data prompts, explains program options and graphically displays results for easy interpretation.

  6. A comprehensive method for preliminary design optimization of axial gas turbine stages. II - Code verification

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1983-01-01

    The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.

  7. Optimal coding-decoding for systems controlled via a communication channel

    NASA Astrophysics Data System (ADS)

    Yi-wei, Feng; Guo, Ge

    2013-12-01

    In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. In contrast to previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for different network configurations. The results are useful in determining the minimum channel capacity needed in order to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.

  8. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

    Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum 99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had 'experimentally demonstrated to be among the safest of all various type of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code (Fluidity); the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  9. Using Microsoft Excel as a pre-processor for CODE V optimization of air spaces when building camera lenses

    NASA Astrophysics Data System (ADS)

    Stephenson, Dave

    2013-09-01

    When building high-performance camera lenses, it is often preferable to tailor element-to-element air spaces instead of tightening the fabrication tolerances sufficiently so that random assembly is possible. A tailored air space solution is usually unique for each serial number camera lens and results in nearly nominal performance. When these air spaces are computed based on measured radii, thickness, and refractive indices, this can put a strain on the design engineering department to deal with all the data in a timely fashion. Excel† may be used by the assembly technician as a preprocessor tool to facilitate data entry and organization, and to perform the optimization using CODE V‡ (or equivalent) without any training or experience in using lens design software. This makes it unnecessary to involve design engineering for each lens serial number, sometimes waiting in their work queue. In addition, Excel can be programmed to run CODE V in such a way that discrete shim thicknesses result. This makes it possible for each tailored air space solution to be achieved using a finite number of shims that differ in thickness by a reasonable amount. It is generally not necessary to tailor the air spaces in each lens to the micron level to achieve nearly nominal performance.

  10. Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications

    NASA Astrophysics Data System (ADS)

    Roser, Miguel; Villegas, Paulo

    1994-05-01

    In this paper we present a work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, 'Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s', widely known as MPEG1. The main intention in the definition of the MPEG 1 standard was to provide a large degree of flexibility to be used in many different applications. The interest of this paper is to adapt the MPEG 1 scheme for low bitrate operation and optimize it for special situations, as for example, a talking head with low movement, which is a usual situation in videotelephony application. An adapted and compatible MPEG 1 scheme, previously developed, able to operate at p×8 kbit/s will be used in this work. Looking for a low complexity scheme and taking into account that the most expensive (from the point of view of consumed computer time) step in the scheme is the motion estimation process (almost 80% of the total computer time is spent on the ME), an improvement of the motion estimation module based on the use of a new search pattern is presented in this paper.
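
    The paper's specific search pattern is not spelled out in the abstract, so the sketch below shows a generic SAD-based three-step search for a single macroblock, purely to illustrate how a reduced search pattern trades a handful of SAD evaluations per step against the cost of an exhaustive search. Frame data and block coordinates are synthetic.

      import numpy as np

      def sad(cur, ref, bx, by, dx, dy, n=16):
          """Sum of absolute differences between the current n x n block at (bx, by)
          and the block displaced by (dx, dy) in the reference frame."""
          h, w = ref.shape
          if not (0 <= by + dy <= h - n and 0 <= bx + dx <= w - n):
              return np.inf
          c = cur[by:by + n, bx:bx + n]
          r = ref[by + dy:by + dy + n, bx + dx:bx + dx + n]
          return np.abs(c - r).sum()

      def three_step_search(cur, ref, bx, by, step=4, n=16):
          """Classic three-step search: evaluate a 3x3 pattern, halve the step, repeat."""
          best = (0, 0)
          while step >= 1:
              candidates = [(best[0] + sx * step, best[1] + sy * step)
                            for sx in (-1, 0, 1) for sy in (-1, 0, 1)]
              best = min(candidates, key=lambda d: sad(cur, ref, bx, by, d[0], d[1], n))
              step //= 2
          return best   # (dx, dy) motion vector for this macroblock

      yy, xx = np.mgrid[0:64, 0:64]
      ref = np.sin(xx / 5.0) + np.cos(yy / 7.0)         # smooth synthetic reference frame
      cur = np.roll(ref, shift=(2, 3), axis=(0, 1))     # current frame: reference shifted
      print(three_step_search(cur, ref, bx=16, by=16))  # expect roughly (dx, dy) = (-3, -2)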

  11. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Subsequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low pass filtering in horizontal and vertical direction, quality and readability of small text and graphics content is heavily compromised when color contrast is high in chrominance channels. On the other hand, straight forward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels combined with foreground/background color vectors of a local color map promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is being minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high quality color rendering, and identifies remaining visual artifacts.

  12. [A comparison of the knockout efficiencies of two codon-optimized Cas9 coding sequences in zebrafish embryos].

    PubMed

    Fenghua, Zhang; Houpeng, Wang; Siyu, Huang; Feng, Xiong; Zuoyan, Zhu; Yonghua, Sun

    2016-02-01

    Recent years have witnessed the rapid development of the clustered regularly interspaced short palindromic repeats/CRISPR-associated protein (CRISPR/Cas9) system. In order to realize gene knockout with high efficiency and specificity in zebrafish, several labs have synthesized distinct Cas9 cDNA sequences which were cloned into different vectors. In this study, we chose two commonly used zebrafish-codon-optimized Cas9 coding sequences (zCas9_bz, zCas9_wc) from two different labs, and utilized them to knock out seven genes in zebrafish embryos, including the exogenous egfp and six endogenous genes (chd, hbegfa, th, eef1a1b, tyr and tcf7l1a). We compared the knockout efficiencies resulting from the two zCas9 coding sequences, by direct sequencing of PCR products, colony sequencing and phenotypic analysis. The results showed that the knockout efficiency of zCas9_wc was higher than that of zCas9_bz in all conditions. PMID:26907778

  13. Laser-induced fusion in ultra-dense deuterium D(-1): Optimizing MeV particle emission by carrier material selection

    NASA Astrophysics Data System (ADS)

    Holmlid, Leif

    2013-02-01

    Power generation by laser-induced nuclear fusion in ultra-dense deuterium D(-1) requires that the carrier material interacts correctly with D(-1) prior to the laser pulse and also during the laser pulse. In previous studies, the interaction between the superfluid D(-1) layer and various carrier materials prior to the laser pulse has been investigated. It was shown that organic polymer materials do not give a condensed D(-1) layer. Metal surfaces carry thicker D(-1) layers useful for fusion. Here, the interaction between the carrier and the nuclear fusion process is investigated by observing the MeV particle emission (e.g. 14 MeV protons) using twelve different carrier materials and two different methods of detection. Several factors have been analyzed for the performance of the carrier materials: the hardness and the melting point of the material, and the chemical properties of the surface layer. The best performance is found for the high-melting metals Ti and Ta, but also Cu performs well as carrier despite its low melting point. The unexpectedly meager performance of Ni and Ir may be due to their catalytic activity towards hydrogen which may give atomic association to deuterium molecules at the low D2 pressure used.

  14. Atoms in dense plasmas

    SciTech Connect

    More, R.M.

    1986-01-01

    Recent experiments with high-power pulsed lasers have strongly encouraged the development of improved theoretical understanding of highly charged ions in a dense plasma environment. This work examines the theory of dense plasmas with emphasis on general rules which govern matter at extreme high temperature and density. 106 refs., 23 figs.

  15. A four-column theory for the origin of the genetic code: tracing the evolutionary pathways that gave rise to an optimized code

    PubMed Central

    Higgs, Paul G

    2009-01-01

    Background The arrangement of the amino acids in the genetic code is such that neighbouring codons are assigned to amino acids with similar physical properties. Hence, the effects of translational error are minimized with respect to randomly reshuffled codes. Further inspection reveals that it is amino acids in the same column of the code (i.e. same second base) that are similar, whereas those in the same row show no particular similarity. We propose a 'four-column' theory for the origin of the code that explains how the action of selection during the build-up of the code leads to a final code that has the observed properties. Results The theory makes the following propositions. (i) The earliest amino acids in the code were those that are easiest to synthesize non-biologically, namely Gly, Ala, Asp, Glu and Val. (ii) These amino acids are assigned to codons with G at first position. Therefore the first code may have used only these codons. (iii) The code rapidly developed into a four-column code where all codons in the same column coded for the same amino acid: NUN = Val, NCN = Ala, NAN = Asp and/or Glu, and NGN = Gly. (iv) Later amino acids were added sequentially to the code by a process of subdivision of codon blocks in which a subset of the codons assigned to an early amino acid were reassigned to a later amino acid. (v) Later amino acids were added into positions formerly occupied by amino acids with similar properties because this can occur with minimal disruption to the proteins already encoded by the earlier code. As a result, the properties of the amino acids in the final code retain a four-column pattern that is a relic of the earliest stages of code evolution. Conclusion The driving force during this process is not the minimization of translational error, but positive selection for the increased diversity and functionality of the proteins that can be made with a larger amino acid alphabet. Nevertheless, the code that results is one in which translational

  16. Kinetic Simulations of Dense Plasma Focus Breakdown

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Higginson, D. P.; Jiang, S.; Link, A.; Povilus, A.; Sears, J.; Bennett, N.; Rose, D. V.; Welch, D. R.

    2015-11-01

    A dense plasma focus (DPF) device is a type of plasma gun that drives current through a set of coaxial electrodes to assemble gas inside the device and then implode that gas on axis to form a Z-pinch. This implosion drives hydrodynamic and kinetic instabilities that generate strong electric fields, which produces a short intense pulse of x-rays, high-energy (>100 keV) electrons and ions, and (in deuterium gas) neutrons. A strong factor in pinch performance is the initial breakdown and ionization of the gas along the insulator surface separating the two electrodes. The smoothness and isotropy of this ionized sheath are imprinted on the current sheath that travels along the electrodes, thus making it an important portion of the DPF to both understand and optimize. Here we use kinetic simulations in the Particle-in-cell code LSP to model the breakdown. Simulations are initiated with neutral gas and the breakdown modeled self-consistently as driven by a charged capacitor system. We also investigate novel geometries for the insulator and electrodes to attempt to control the electric field profile. The initial ionization fraction of gas is explored computationally to gauge possible advantages of pre-ionization which could be created experimentally via lasers or a glow-discharge. Prepared by LLNL under Contract DE-AC52-07NA27344.

  17. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  18. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  19. User's guide for the BNW-III optimization code for modular dry/wet-cooled power plants

    SciTech Connect

    Braun, D.J.; Faletti, D.W.

    1984-09-01

    This user's guide describes BNW-III, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of the Dry Cooling Enhancement Program sponsored by the US Department of Energy (DOE). The BNW-III code models a modular dry/wet cooling system for a nuclear or fossil fuel power plant. The purpose of this guide is to give the code user a brief description of what the BNW-III code is and how to use it. It describes the cooling system being modeled and the various models used. A detailed description of code input and code output is also included. The BNW-III code was developed to analyze a specific cooling system layout. However, there is a large degree of freedom in the type of cooling modules that can be selected and in the performance of those modules. The costs of the modules are input to the code, giving the user a great deal of flexibility.

  20. Optimization of Grit-Blasting Process Parameters for Production of Dense Coatings on Open Pores Metallic Foam Substrates Using Statistical Methods

    NASA Astrophysics Data System (ADS)

    Salavati, S.; Coyle, T. W.; Mostaghimi, J.

    2015-10-01

    Open pore metallic foam core sandwich panels prepared by thermal spraying of a coating on the foam structures can be used as high-efficiency heat transfer devices due to their high surface area to volume ratio. The structural, mechanical, and physical properties of thermally sprayed skins play a significant role in the performance of the related devices. These properties are mainly controlled by the porosity content, oxide content, adhesion strength, and stiffness of the deposited coating. In this study, the effects of grit-blasting process parameters on the characteristics of the temporary surface created on the metallic foam substrate and on the twin-wire arc-sprayed alloy 625 coating subsequently deposited on the foam were investigated through response surface methodology. Characterization of the prepared surface and sprayed coating was conducted by scanning electron microscopy, roughness measurements, and adhesion testing. Using statistical design of experiments, response surface method, a model was developed to predict the effect of grit-blasting parameters on the surface roughness of the prepared foam and also the porosity content of the sprayed coating. The coating porosity and adhesion strength were found to be determined by the substrate surface roughness, which could be controlled by grit-blasting parameters. Optimization of the grit-blasting parameters was conducted using the fitted model to minimize the porosity content of the coating while maintaining a high adhesion strength.

  1. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, R.L.

    1993-10-12

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  2. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, Richard L.

    1993-01-01

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  3. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    NASA Astrophysics Data System (ADS)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR codes entries. In the procedure, letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images, representing the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack could be multiplied by a digital diffuser as to encrypt it. The encrypted pack is easily decoded by multiplying the multiplexing with the complex conjugate of the diffuser. As it is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.
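
    The final security step described, multiplying the multiplexed pack by a digital phase diffuser and decoding with its complex conjugate, reduces to a pointwise complex multiplication. A minimal numpy sketch with synthetic data is shown below; the holographic capture, filtering, and repositioning stages are not modeled.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical multiplexed pack: a complex field holding the processed QR portions.
      pack = rng.random((256, 256)) * np.exp(1j * 2 * np.pi * rng.random((256, 256)))

      # Digital diffuser: a random pure-phase mask used as the encryption key.
      diffuser = np.exp(1j * 2 * np.pi * rng.random((256, 256)))

      encrypted = pack * diffuser                 # encryption: pointwise multiplication
      decrypted = encrypted * np.conj(diffuser)   # decryption: multiply by the conjugate key

      print(np.allclose(decrypted, pack))         # True: no noise is added by this step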

  4. Earthquake source inversion with dense networks

    NASA Astrophysics Data System (ADS)

    Somala, S.; Ampuero, J. P.; Lapusta, N.

    2012-12-01

    Inversions of earthquake source slip from the recorded ground motions typically impose a number of restrictions on the source parameterization, which are needed to stabilize the inverse problem with sparse data. Such restrictions may include smoothing, causality considerations, predetermined shapes of the local source-time function, and constant rupture speed. The goal of our work is to understand whether the inversion results could be substantially improved by the availability of much denser sensor networks than currently available. The best regional networks have sensor spacing in the tens of kilometers range, much larger than the wavelengths relevant to key aspects of earthquake physics. Novel approaches to providing orders-of-magnitude denser sensing include low-cost sensors (Community Seismic Network) and space-based optical imaging (Geostationary Optical Seismometer). However, in both cases, the density of sensors comes at the expense of accuracy. Inversions that involve large number of sensors are intractable with the current source inversion codes. Hence we are developing a new approach that can handle thousands of sensors. It employs iterative conjugate gradient optimization based on an adjoint method and involves iterative time-reversed 3D wave propagation simulations using the spectral element method (SPECFEM3D). To test the developed method, and to investigate the effect of sensor density and quality on the inversion results, we have been considering kinematic and dynamic synthetic sources of several types: one or more Haskell pulses with various widths and spacings; scenarios with local rupture propagation in the opposite direction (as observed during the 2010 El Mayor-Cucapah earthquake); dynamic crack-like rupture, both subshear and supershear; and rupture that mimics supershear propagation by jumping along the fault. In each case, we produce the data by a forward SPECFEM3D calculation, choose the desired density of stations, filter the data to 1 Hz
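
    The inversion machinery described, conjugate-gradient optimization driven by forward and adjoint (time-reversed) simulations, can be outlined with a generic conjugate-gradient least-squares loop in which the forward and adjoint operators stand in for the SPECFEM3D runs. The toy example below replaces the wave solver with a random linear operator; all names and sizes are hypothetical.

      import numpy as np

      def cgls(forward, adjoint, data, x0, n_iters=30):
          """Conjugate-gradient least squares for min_x ||forward(x) - data||^2.
          `forward` plays the role of the wave-propagation simulation and `adjoint`
          the time-reversed (adjoint) simulation; here they are plain linear operators."""
          x = x0.copy()
          r = data - forward(x)           # data residual
          s = adjoint(r)                  # gradient direction from the adjoint
          p = s.copy()
          gamma = np.dot(s, s)
          for _ in range(n_iters):
              q = forward(p)
              alpha = gamma / np.dot(q, q)
              x += alpha * p
              r -= alpha * q
              s = adjoint(r)
              gamma_new = np.dot(s, s)
              p = s + (gamma_new / gamma) * p
              gamma = gamma_new
          return x

      # Toy linear "wave propagation": a random matrix maps slip model -> seismograms.
      rng = np.random.default_rng(0)
      G = rng.standard_normal((200, 50))              # 200 data samples, 50 model parameters
      true_slip = rng.standard_normal(50)
      data = G @ true_slip
      est = cgls(lambda m: G @ m, lambda d: G.T @ d, data, np.zeros(50))
      print(np.linalg.norm(est - true_slip) / np.linalg.norm(true_slip))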

  5. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    SciTech Connect

    Dellin, T.A.; Fish, M.J.; Yang, C.L.

    1981-08-01

    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.

  6. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.

    1995-06-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  7. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role during the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in heavy table memory access, which in turn leads to high table power consumption. To address the heavy table memory access of current methods and thereby reduce power consumption, a memory-efficient, look-up-optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing an index search technique that reduces memory access during table look-up and therefore lowers table power consumption. Specifically, our scheme uses index search to reduce memory access by cutting down the searching and matching operations for code_word, exploiting the internal relationship among the number of leading zeros in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that the proposed index-search-based table look-up algorithm reduces memory access by about 60% compared with a sequential-search table look-up scheme, and thereby saves considerable power for CAVLD in H.264/AVC.
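
    The idea of replacing a sequential search-and-match over codewords with direct indexing on the count of leading zeros and the suffix value can be shown with a toy variable-length code. The table below is made up for illustration and is not an actual H.264 CAVLC table.

      # Hypothetical variable-length code: each codeword is <zeros> 1 <suffix>.
      # Sequential decoding would compare the bitstream against every codeword in turn;
      # indexing by (number of leading zeros, suffix value) reaches the entry directly.

      # (leading_zeros, suffix_length) -> {suffix_value: decoded_symbol}   (made-up table)
      INDEXED_TABLE = {
          (0, 0): {0: "A"},
          (1, 1): {0: "B", 1: "C"},
          (2, 2): {0: "D", 1: "E", 2: "F", 3: "G"},
      }

      def decode_one(bits, pos):
          """Decode one symbol starting at bit position `pos`; returns (symbol, new_pos)."""
          zeros = 0
          while bits[pos + zeros] == 0:      # count the leading zeros of the prefix
              zeros += 1
          pos += zeros + 1                   # skip the prefix and its terminating 1
          suffix_len = zeros                 # in this toy code, suffix length == zero count
          suffix = 0
          for _ in range(suffix_len):
              suffix = (suffix << 1) | bits[pos]
              pos += 1
          return INDEXED_TABLE[(zeros, suffix_len)][suffix], pos

      bitstream = [0, 1, 1, 1, 0, 0, 1, 1, 0]   # encodes C, A, F under this toy code
      pos, out = 0, []
      while pos < len(bitstream):
          sym, pos = decode_one(bitstream, pos)
          out.append(sym)
      print(out)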

  8. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization. PMID:26343660
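
    A generic global-best particle swarm optimizer applied to a toy multiuser-detection cost (a continuous relaxation of recovering antipodal bits through a known mixing matrix, thresholded afterwards) is sketched below. The parameters, the channel matrix, and the cost are placeholders; this is not the paper's D-PSO detector or its MC-CDMA signal model.

      import numpy as np

      def pso_minimize(cost, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0), rng=None):
          """Global-best particle swarm optimization over a continuous search space."""
          rng = np.random.default_rng(0) if rng is None else rng
          lo, hi = bounds
          x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
          v = np.zeros_like(x)                                   # particle velocities
          pbest = x.copy()
          pbest_val = np.array([cost(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)].copy()
          w, c1, c2 = 0.7, 1.5, 1.5                              # inertia and acceleration weights
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([cost(p) for p in x])
              improved = vals < pbest_val
              pbest[improved] = x[improved]
              pbest_val[improved] = vals[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, pbest_val.min()

      # Toy detection cost: recover K users' antipodal bits b from y = H b + noise.
      rng = np.random.default_rng(1)
      K = 8
      H = rng.standard_normal((16, K))                           # stand-in spreading/channel matrix
      b_true = rng.choice([-1.0, 1.0], size=K)
      y = H @ b_true + 0.05 * rng.standard_normal(16)
      cost = lambda b: np.linalg.norm(y - H @ b) ** 2
      b_hat, _ = pso_minimize(cost, dim=K)
      print(np.sign(b_hat), b_true)                              # thresholded estimate vs. truth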

  9. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization. PMID:26343660

  10. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  11. Dense suspension splash

    NASA Astrophysics Data System (ADS)

    Dodge, Kevin M.; Peters, Ivo R.; Ellowitz, Jake; Schaarsberg, Martin H. Klein; Jaeger, Heinrich M.; Zhang, Wendy W.

    2014-11-01

    Impact of a dense suspension drop onto a solid surface at speeds of several meters-per-second splashes by ejecting individual liquid-coated particles. Suppression or reduction of this splash is important for thermal spray coating and additive manufacturing. Accomplishing this aim requires distinguishing whether the splash is generated by individual scattering events or by collective motion reminiscent of liquid flow. Since particle inertia dominates over surface tension and viscous drag in a strong splash, we model suspension splash using a discrete-particle simulation in which the densely packed macroscopic particles experience inelastic collisions but zero friction or cohesion. Numerical results based on this highly simplified model are qualitatively consistent with observations. They also show that approximately 70% of the splash is generated by collective motion. Here an initially downward-moving particle is ejected into the splash because it experiences a succession of low-momentum-change collisions whose effects do not cancel but instead accumulate. The remainder of the splash is generated by scattering events in which a small number of high-momentum-change collisions cause a particle to be ejected upwards. Current Address: Physics of Fluids Group, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands.

  12. Warm dense crystallography

    NASA Astrophysics Data System (ADS)

    Valenza, Ryan A.; Seidler, Gerald T.

    2016-03-01

    The intense femtosecond-scale pulses from x-ray free electron lasers (XFELs) are able to create and interrogate interesting states of matter characterized by long-lived nonequilibrium semicore or core electron occupancies or by the heating of dense phases via the relaxation cascade initiated by the photoelectric effect. We address here the latter case of "warm dense matter" (WDM) and investigate the observable consequences of x-ray heating of the electronic degrees of freedom in crystalline systems. We report temperature-dependent density functional theory calculations for the x-ray diffraction from crystalline LiF, graphite, diamond, and Be. We find testable, strong signatures of condensed-phase effects that emphasize the importance of wide-angle scattering to study nonequilibrium states. These results also suggest that the reorganization of the valence electron density at eV-scale temperatures presents a confounding factor to achieving atomic resolution in macromolecular serial femtosecond crystallography (SFX) studies at XFELs, as performed under the "diffract before destroy" paradigm.

  13. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039

  14. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  15. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  16. Colon specific CODES based Piroxicam tablet for colon targeting: statistical optimization, in vivo roentgenography and stability assessment.

    PubMed

    Singh, Amit Kumar; Pathak, Kamla

    2015-03-01

    This study aimed to statistically optimize a CODES™ based Piroxicam (PXM) tablet for colon targeting. A 3^2 full factorial design was used for preparation of the core tablet, which was subsequently coated to get the CODES™ based tablet. The experimental design of the core tablets comprised two independent variables, the amounts of lactulose and PEG 6000, each at three different levels, and the dependent variable was %CDR at 12 h. The core tablets were evaluated by pharmacopoeial and non-pharmacopoeial tests and coated with optimized levels of Eudragit E100 followed by HPMC K15 and finally with Eudragit S100. The in vitro drug release study of F1-F9 was carried out by a change-over media method (0.1 N HCl buffer, pH 1.2, phosphate buffer, pH 7.4 and phosphate buffer, pH 6.8 with enzyme β-galactosidase 120 IU) to select the optimized formulation F9, which was subjected to in vivo roentgenography. The roentgenography study corroborated the in vitro performance, thus providing the proof of concept. The experimental design was validated by an extra check point formulation, and Diffuse Reflectance Spectroscopy revealed the absence of any interaction between the drug and formulation excipients. The shelf life of F9 was deduced as 12 months. In conclusion, colon targeted CODES™ technology based PXM tablets were successfully optimized and their potential for colon targeting was validated by roentgenography. PMID:24266719
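
    The design-of-experiments step can be illustrated by generating the 3^2 full factorial design in coded levels and fitting a quadratic response-surface model to the measured response by least squares. The factor names, levels, and response values below are placeholders, not the formulation data of the study.

      import itertools
      import numpy as np

      # 3^2 full factorial design: two factors (e.g. lactulose and PEG 6000 amounts),
      # each at coded levels -1, 0, +1 -> nine runs.
      levels = [-1, 0, 1]
      design = np.array(list(itertools.product(levels, repeat=2)), dtype=float)
      print(design)        # nine (x1, x2) combinations

      # Hypothetical measured response per run, e.g. % cumulative drug release at 12 h.
      y = np.array([52, 61, 66, 58, 70, 76, 63, 78, 85], dtype=float)

      # Quadratic response-surface model:
      # y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
      x1, x2 = design[:, 0], design[:, 1]
      X = np.column_stack([np.ones(9), x1, x2, x1 * x2, x1**2, x2**2])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], np.round(coef, 2))))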

  17. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy (DOE) Consortium for Advanced Simulations of Light Water (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain-decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input over the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed; MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
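
    The SPMD decomposition described, one MPI process per fuel assembly with message passing at domain boundaries, can be reduced to a small halo-exchange pattern. The mpi4py sketch below (assuming mpi4py and an MPI launcher are available) exchanges one ghost cell between neighboring ranks on a 1-D decomposition; it illustrates the communication pattern only, not CTF's Fortran implementation or its PETSc pressure solve.

      # Minimal 1-D domain-decomposition halo exchange with mpi4py
      # (run e.g. with `mpiexec -n 4 python halo.py`).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 8                                   # cells owned by this rank ("assembly")
      field = np.full(n_local + 2, float(rank))     # +2 ghost cells at the ends

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Send my first owned cell left, receive my left ghost from the left neighbor.
      comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[0:1], source=left)
      # Send my last owned cell right, receive my right ghost from the right neighbor.
      comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[-1:], source=right)

      print(f"rank {rank}: ghosts = {field[0]}, {field[-1]}")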

  18. Dense Hypervelocity Plasma Jets

    NASA Astrophysics Data System (ADS)

    Witherspoon, F. Douglas; Case, Andrew; Phillips, Michael W.

    2006-10-01

    High velocity dense plasma jets are under continued experimental development for a variety of fusion applications including refueling, disruption mitigation, rotation drive, and magnetized target fusion. The technical goal is to accelerate plasma slugs of density >10^17 cm-3 and total mass >100 micrograms to velocities >200 km/s. The approach utilizes symmetrical injection of very high density plasma into a coaxial EM accelerator having a tailored cross-section geometry to prevent formation of the blow-by instability. Injected plasma is generated by electrothermal capillary discharges using either cylindrical capillaries or a newer toroidal spark gap arrangement that has worked at pressures as low as 3.5 x10-6 Torr in bench tests. Experimental plasma data will be presented for a complete 32 injector accelerator system recently built for driving rotation in the Maryland MCX experiment which utilizes the cylindrical capillaries, and also for a 50 spark gap test unit currently under construction.

  19. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code

    NASA Astrophysics Data System (ADS)

    Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.; Ippolito, N.

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around the single aperture, including part of the source and part of the acceleration region (up to the middle of the extraction grid (EG)), has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results show that the dimensions of the flat and chamfered parts, and the slope of the latter in front of the source region, maximize the product of the production rate and extraction probability of surface-produced negative ions (allowing the best EG field penetration). The negative ion density in the yz plane is also reported.

  20. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code.

    PubMed

    Taccogna, F; Minelli, P; Cavenago, M; Veltri, P; Ippolito, N

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around the single aperture, including part of the source and part of the acceleration region (up to the middle of the extraction grid (EG)), has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results show that the dimensions of the flat and chamfered parts, and the slope of the latter in front of the source region, maximize the product of the production rate and extraction probability of surface-produced negative ions (allowing the best EG field penetration). The negative ion density in the yz plane is also reported. PMID:26932027

  1. Geometrical Optics of Dense Aerosols

    SciTech Connect

    Hay, Michael J.; Valeo, Ernest J.; Fisch, Nathaniel J.

    2013-04-24

    Assembling a free-standing, sharp-edged slab of homogeneous material that is much denser than gas, but much more rarefied than a solid, is an outstanding technological challenge. The solution may lie in focusing a dense aerosol to assume this geometry. However, whereas the geometrical optics of dilute aerosols is a well-developed field, the dense aerosol limit is mostly unexplored. Yet controlling the geometrical optics of dense aerosols is necessary in preparing such a material slab. Focusing dense aerosols is shown here to be possible, but the finite particle density reduces the effective Stokes number of the flow, a critical result for controlled focusing.

  2. BUMPERII - DESIGN ANALYSIS CODE FOR OPTIMIZING SPACECRAFT SHIELDING AND WALL CONFIGURATION FOR ORBITAL DEBRIS AND METEOROID IMPACTS

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1994-01-01

    BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element of each threat. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability
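
    The Poisson-based probability of no penetration (PNP) can be sketched as follows. This is a hedged illustration only: the element areas and penetrating-flux values are made-up numbers, and the simple flux * area * time form of the expected-impact count is an assumed simplification; BUMPERII's actual formulation also folds in shadowing, orientation, and wall-response ballistic limits.

    import math

    # Hedged sketch: PNP under a Poisson model of penetrating impacts.
    # Expected penetrations per element assumed to be flux * area * exposure time.
    elements = [
        {"area_m2": 2.0, "penetrating_flux_per_m2_yr": 1.0e-5},  # illustrative
        {"area_m2": 0.5, "penetrating_flux_per_m2_yr": 4.0e-5},  # illustrative
    ]
    exposure_years = 10.0

    expected_hits = sum(e["area_m2"] * e["penetrating_flux_per_m2_yr"] * exposure_years
                        for e in elements)
    pnp = math.exp(-expected_hits)   # Poisson probability of zero penetrations
    print(f"expected penetrations = {expected_hits:.4e}, PNP = {pnp:.6f}")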

  3. Ariel's Densely Pitted Surface

    NASA Technical Reports Server (NTRS)

    1986-01-01

    This mosaic of the four highest-resolution images of Ariel represents the most detailed Voyager 2 picture of this satellite of Uranus. The images were taken through the clear filter of Voyager's narrow-angle camera on Jan. 24, 1986, at a distance of about 130,000 kilometers (80,000 miles). Ariel is about 1,200 km (750 mi) in diameter; the resolution here is 2.4 km (1.5 mi). Much of Ariel's surface is densely pitted with craters 5 to 10 km (3 to 6 mi) across. These craters are close to the threshold of detection in this picture. Numerous valleys and fault scarps crisscross the highly pitted terrain. Voyager scientists believe the valleys have formed over down-dropped fault blocks (graben); apparently, extensive faulting has occurred as a result of expansion and stretching of Ariel's crust. The largest fault valleys, near the terminator at right, as well as a smooth region near the center of this image, have been partly filled with deposits that are younger and less heavily cratered than the pitted terrain. Narrow, somewhat sinuous scarps and valleys have been formed, in turn, in these young deposits. It is not yet clear whether these sinuous features have been formed by faulting or by the flow of fluids.

    JPL manages the Voyager project for NASA's Office of Space Science.

  4. Dense Hypervelocity Plasma Jets

    NASA Astrophysics Data System (ADS)

    Case, Andrew; Witherspoon, F. Douglas; Messer, Sarah; Bomgardner, Richard; Phillips, Michael; van Doren, David; Elton, Raymond; Uzun-Kaymak, Ilker

    2007-11-01

    We are developing high velocity dense plasma jets for fusion and HEDP applications. Traditional coaxial plasma accelerators suffer from the blow-by instability which limits the mass accelerated to high velocity. In the current design blow-by is delayed by a combination of electrode shaping and use of a tailored plasma armature created by injection of a high density plasma at a few eV generated by arrays of capillary discharges or sparkgaps. Experimental data will be presented for a complete 32 injector gun system built for driving rotation in the Maryland MCX experiment, including data on penetration of the plasma jet through a magnetic field. We present spectroscopic measurements of plasma velocity, temperature, and density, as well as total momentum measured using a ballistic pendulum. Measurements are in agreement with each other and with time of flight data from photodiodes and a multichannel PMT. Plasma density is above 10^15 cm-3, velocities range up to about 100 km/s. Preliminary results from a quadrature heterodyne HeNe interferometer are consistent with these results.

  5. Multi-scaling of the dense plasma focus

    NASA Astrophysics Data System (ADS)

    Saw, S. H.; Lee, S.

    2015-03-01

    The dense plasma focus is a copious source of multi-radiations with many potential new applications of special interest such as in advanced SXR lithography, materials synthesizing and testing, medical isotopes and imaging. This paper reviews the series of numerical experiments conducted using the Lee model code to obtain the scaling laws of the multi-radiations.

  6. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.
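
    The ELM component of the approach lends itself to a compact sketch: a random hidden layer followed by a closed-form least-squares fit of the output weights, evaluated for one candidate input subset such as a binary PSO particle might encode. Everything below (toy data, subset, layer size) is an illustrative assumption; the BFIPS/MBFIPS search wrapper itself is not reproduced.

    import numpy as np

    # Hedged sketch of an Extreme Learning Machine (ELM) fit for one input subset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                 # toy candidate inputs (e.g. lagged rainfall)
    y = X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)   # toy streamflow target

    selected = [0, 2, 5]                          # candidate subset, as a binary particle might encode
    n_hidden = 20
    W = rng.normal(size=(len(selected), n_hidden))
    b = rng.normal(size=n_hidden)

    H = np.tanh(X[:, selected] @ W + b)           # random feature expansion
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
    print(f"training RMSE for this input subset: {rmse:.3f}")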

  7. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
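
    The bandwidth-expansion figures quoted above follow from simple rate arithmetic, worked below for an assumed RS(255,223) outer code; the specific code parameters are an illustrative choice, not taken from the report.

    # Hedged arithmetic sketch: bandwidth expansion of a concatenated scheme.
    # With TCM the modulation absorbs the inner redundancy, so the expansion
    # comes only from the outer RS code. RS(255,223) is an assumed example.
    n, k = 255, 223
    rs_expansion = n / k - 1.0
    print(f"RS({n},{k}) alone: {100 * rs_expansion:.1f}% bandwidth expansion")

    # A comparable concatenation with a rate-1/2 convolutional inner code expands
    # bandwidth by both code rates (illustrative of the 70-150% range quoted above).
    conv_rate = 1 / 2
    total_expansion = (n / k) / conv_rate - 1.0
    print(f"RS({n},{k}) + rate-1/2 convolutional: {100 * total_expansion:.1f}% expansion")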

  8. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2012-07-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515]. New version program summaryProgram title: HFFER II Catalogue identifier: AECC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: v 55 130 No. of bytes in distributed program, including test data, etc.: 293 700 Distribution format: tar.gz Programming language: Fortran 95 Computer: Cluster of 1-13 HP Compaq dc5750 Operating system: Linux Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives. RAM: 1 GByte per node Classification: 2.1 External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package) Catalogue identifier of previous version: AECC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302 Does the new version supersede the previous version?: Yes Nature of problem: Quantitative modellings of features observed in the X-ray spectra of isolated magnetic neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases. Solution method: The

  9. Population kinetics in dense plasmas

    SciTech Connect

    Schlanges, M.; Bornath, T.; Prenzel, R.; Kremp, D.

    1996-07-01

    Starting from quantum kinetic equations, rate equations for the number densities of the different atomic states and equations for the energy density are derived which are valid for dense nonideal plasmas. Statistical expressions are presented for the rate coefficients taking into account many-body effects such as dynamical screening, lowering of the ionization energy and Pauli blocking. Based on these generalized expressions, the coefficients of impact ionization, three-body recombination, excitation and deexcitation are calculated for nonideal hydrogen and carbon plasmas. As a result, higher ionization and recombination rates are obtained in the dense plasma region. The influence of the many-body effects on the population kinetics, including density and temperature relaxation, is then shown for a dense hydrogen plasma. © 1996 American Institute of Physics.
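
    As a hedged illustration of the kind of rate equation referred to above (the paper's specific generalized expressions are not reproduced), the ionization balance between a charge state Z and the next one can be written as

    \frac{dn_{Z+1}}{dt} \;=\; \alpha_{Z}\, n_{e}\, n_{Z} \;-\; \beta_{Z+1}\, n_{e}^{2}\, n_{Z+1},

    where \alpha_{Z} is the impact-ionization coefficient and \beta_{Z+1} the three-body recombination coefficient; in the dense, nonideal regime both coefficients acquire corrections from dynamical screening, ionization-energy lowering, and Pauli blocking.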

  10. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
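
    A minimal sketch of the standard two-dimensional block-cyclic mapping discussed above; the grid shape and block indices are arbitrary examples, and the 'rotation' and 'striding' variants introduced in the paper are not reproduced here.

    # Hedged sketch: standard 2-D block-cyclic mapping of matrix blocks onto a
    # P x Q process grid (as used by HPL-style dense LU).
    def block_cyclic_owner(i_block: int, j_block: int, P: int, Q: int) -> tuple[int, int]:
        """Return (process_row, process_col) owning block (i_block, j_block)."""
        return (i_block % P, j_block % Q)

    P, Q = 2, 3                      # process grid dimensions (illustrative)
    for i in range(4):
        print([block_cyclic_owner(i, j, P, Q) for j in range(4)])
    # Each process column holds every Q-th block column, which is why an active
    # panel factorization is confined to a single process column at a time.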

  11. On the Grammar of Code-Switching.

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.

    1996-01-01

    Explores an Optimality-Theoretic approach to account for observed cross-linguistic patterns of code switching that assumes that code switching strives for well-formedness. Optimization of well-formedness in code switching is shown to follow from (violable) ranked constraints. An argument is advanced that code-switching patterns emerge from…

  12. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J. |; van de Geijn, R.; Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.

  13. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  14. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts. 3. Development of a code generator for performance prediction. 4. Automated partitioning. 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  15. Validation of spatiotemporally dense springtime land surface phenology with intensive and upscale in situ

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Land surface phenology (LSP), developed using temporally and spatially optimized remote sensing data, is particularly promising for use in detailed ecosystem monitoring and modeling efforts. Validating spatiotemporally dense LSP using compatible (intensively collected) in situ phenological data is t...

  16. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    SciTech Connect

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-06-15

    Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence-pattern along the beam-axis the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
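
    A rough sketch of the matrix-based transport the abstract describes: drift and thick-lens quadrupole transfer matrices in one transverse plane, composed along a beamline. This is written in Python rather than Matlab, and all lengths, quadrupole strengths, and the initial beam vector are invented placeholders, not the facility's parameters.

    import numpy as np

    # Hedged sketch of matrix optics in one transverse plane (x, x').
    def drift(L):
        return np.array([[1.0, L], [0.0, 1.0]])

    def quad(k2, L):
        """Thick-lens quadrupole; k2 > 0 focusing, k2 < 0 defocusing (units m^-2)."""
        k = np.sqrt(abs(k2))
        if k2 > 0:
            return np.array([[np.cos(k * L),      np.sin(k * L) / k],
                             [-k * np.sin(k * L), np.cos(k * L)]])
        return np.array([[np.cosh(k * L),     np.sinh(k * L) / k],
                         [k * np.sinh(k * L), np.cosh(k * L)]])

    # Matrices compose right-to-left: the beam first traverses the rightmost element.
    # Beam path: 2.0 m drift, defocusing quad, 0.5 m drift, focusing quad, 1.0 m drift.
    beamline = drift(1.0) @ quad(+2.5, 0.3) @ drift(0.5) @ quad(-2.5, 0.3) @ drift(2.0)
    x0 = np.array([1e-3, 0.5e-3])          # 1 mm offset, 0.5 mrad divergence
    print("(x, x') at iso-center:", beamline @ x0)
    # An optimizer (Matlab in the study) would adjust the two quadrupole strengths
    # to obtain a circular, thin spot at the iso-center.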

  17. Warm Dense Matter: An Overview

    SciTech Connect

    Kalantar, D H; Lee, R W; Molitoris, J D

    2004-04-21

    This document provides a summary of the "LLNL Workshop on Extreme States of Materials: Warm Dense Matter to NIF," which was held on 20, 21, and 22 February 2002 at the Wente Conference Center in Livermore, CA. The warm dense matter regime, the transitional phase space region between cold material and hot plasma, is presently poorly understood. The drive to understand the nature of matter in this regime is sparking scientific activity worldwide. In addition to pure scientific interest, finite temperature dense matter occurs in the regimes of interest to the SSMP (Stockpile Stewardship Materials Program). Thus, obtaining a better understanding of WDM is important for performing effective experiments at, e.g., NIF, a primary mission of LLNL. At this workshop we examined current experimental and theoretical work performed at, and in conjunction with, LLNL to focus future activities and define our role in this rapidly emerging research area. On the experimental front LLNL plays a leading role in three of the five relevant areas and has the opportunity to become a major player in the other two. Discussion at the workshop indicated that the path forward for the experimental efforts at LLNL was twofold: First, we are doing reasonable baseline work at SPLs, HE, and High Energy Lasers, with more effort encouraged. Second, we need to plan effectively for the next evolution in large-scale facilities, both laser (NIF) and light/beam sources (LCLS/TESLA and GSI). Theoretically, LLNL has major research advantages in areas ranging from the thermochemical approach to warm dense matter equations of state to first-principles molecular dynamics simulations. However, it was clear that there is much work to be done theoretically to understand warm dense matter. Further, there is a need for close collaboration between experiment and theory, with the generation of verifiable experimental data providing benchmarks of both the experimental techniques and the theoretical capabilities. The conclusion of this

  18. Transonic aerodynamics of dense gases. M.S. Thesis - Virginia Polytechnic Inst. and State Univ., Apr. 1990

    NASA Technical Reports Server (NTRS)

    Morren, Sybil Huang

    1991-01-01

    Transonic flow of dense gases was predicted for two-dimensional, steady-state flow over a NACA 0012 airfoil. The computer code used to model the dense gas behavior was a modified version of Jameson's FLO52 airfoil code. The modifications to the code enabled modeling the dense gas behavior near the saturated vapor curve and critical pressure region, where the fundamental derivative, Gamma, is negative. This negative-Gamma region is of interest because nonclassical gas behavior, such as the formation and propagation of expansion shocks and the disintegration of inadmissible compression shocks, may exist there. The results indicated that dense gases with undisturbed thermodynamic states in the negative-Gamma region show a significant reduction in the extent of the transonic regime as compared to that predicted by perfect gas theory. The results support existing theories and predictions of the nonclassical, dense gas behavior from previous investigations.
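
    For reference, the fundamental derivative of gas dynamics mentioned above is conventionally defined as below (standard textbook definition, not quoted from the thesis):

    \Gamma \;=\; 1 + \frac{\rho}{c}\left(\frac{\partial c}{\partial \rho}\right)_{\!s}
    \;=\; \frac{v^{3}}{2c^{2}}\left(\frac{\partial^{2} p}{\partial v^{2}}\right)_{\!s},

    so that states with \Gamma < 0 admit nonclassical phenomena such as expansion shocks.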

  19. Boundary Preserving Dense Local Regions.

    PubMed

    Kim, Jaechul; Grauman, Kristen

    2015-05-01

    We propose a dense local region detector to extract features suitable for image matching and object recognition tasks. Whereas traditional local interest operators rely on repeatable structures that often cross object boundaries (e.g., corners, scale-space blobs), our sampling strategy is driven by segmentation, and thus preserves object boundaries and shape. At the same time, whereas existing region-based representations are sensitive to segmentation parameters and object deformations, our novel approach to robustly sample dense sites and determine their connectivity offers better repeatability. In extensive experiments, we find that the proposed region detector provides significantly better repeatability and localization accuracy for object matching compared to an array of existing feature detectors. In addition, we show our regions lead to excellent results on two benchmark tasks that require good feature matching: weakly supervised foreground discovery and nearest neighbor-based object recognition. PMID:26353319

  20. An efficient fully atomistic potential model for dense fluid methane

    NASA Astrophysics Data System (ADS)

    Jiang, Chuntao; Ouyang, Jie; Zhuang, Xin; Wang, Lihua; Li, Wuming

    2016-08-01

    A fully atomistic model, intended as a general-purpose model for dense fluid methane, is presented. The new optimized potential for liquid simulation (OPLS) model is a rigid five-site model which consists of five fixed point charges and five Lennard-Jones centers. The parameters in the potential model are determined by fitting experimental data for dense fluid methane using molecular dynamics simulation. The radial distribution function and the diffusion coefficient are successfully calculated for dense fluid methane at various state points. The simulated results are in good agreement with the available experimental data reported in the literature. Moreover, the distribution of the mean number of hydrogen bonds and the distribution of pair energy are analyzed, as obtained from the new model and five other reference potential models. Furthermore, the space-time correlation functions for dense fluid methane are also discussed. All the numerical results demonstrate that the new OPLS model is well suited to investigating dense fluid methane.
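
    The site-site interaction in an OPLS-style model takes the familiar Lennard-Jones plus Coulomb form; the sketch below evaluates it for a single pair of sites. The epsilon, sigma, and partial-charge values are placeholders, not the parameters fitted for dense fluid methane in the study.

    # Hedged sketch of an OPLS-style site-site pair energy: Lennard-Jones + Coulomb.
    COULOMB_CONST = 138.935458   # kJ mol^-1 nm e^-2 (Coulomb constant in MD units)

    def pair_energy(r_nm, q1, q2, sigma_nm, epsilon_kj):
        lj = 4.0 * epsilon_kj * ((sigma_nm / r_nm) ** 12 - (sigma_nm / r_nm) ** 6)
        coulomb = COULOMB_CONST * q1 * q2 / r_nm
        return lj + coulomb

    # Example: a carbon-carbon site pair at 0.4 nm with illustrative parameters.
    print(pair_energy(r_nm=0.4, q1=-0.24, q2=-0.24, sigma_nm=0.35, epsilon_kj=0.28))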

  1. Dense, finely grained composite materials

    DOEpatents

    Dunmead, Stephen D.; Holt, Joseph B.; Kingman, Donald D.; Munir, Zuhair A.

    1990-01-01

    Dense, finely grained composite materials comprising one or more ceramic phases and one or more metallic and/or intermetallic phases are produced by combustion synthesis. Spherical ceramic grains are homogeneously dispersed within the matrix. Methods are provided, which include the step of applying mechanical pressure during or immediately after ignition, by which the microstructures in the resulting composites can be controllably selected.

  2. Dense periodic packings of tori

    NASA Astrophysics Data System (ADS)

    Gabbrielli, Ruggero; Jiao, Yang; Torquato, Salvatore

    2014-02-01

    Dense packings of nonoverlapping bodies in three-dimensional Euclidean space R3 are useful models of the structure of a variety of many-particle systems that arise in the physical and biological sciences. Here we investigate the packing behavior of congruent ring tori in R3, which are multiply connected nonconvex bodies of genus 1, as well as horn and spindle tori. Specifically, we analytically construct a family of dense periodic packings of unlinked tori guided by the organizing principles originally devised for simply connected solid bodies [Torquato and Jiao, Phys. Rev. E 86, 011102 (2012), 10.1103/PhysRevE.86.011102]. We find that the horn tori as well as certain spindle and ring tori can achieve a packing density not only higher than that of spheres (i.e., π/√18 = 0.7404...) but also higher than the densest known ellipsoid packings (i.e., 0.7707...). In addition, we study dense packings of clusters of pair-linked ring tori (i.e., Hopf links), which can possess much higher densities than corresponding packings consisting of unlinked tori.

  3. Dense, Viscous Brine Behavior in Heterogeneous Porous Medium Systems

    PubMed Central

    Wright, D. Johnson; Pedit, J.A.; Gasda, S.E.; Farthing, M.W.; Murphy, L.L.; Knight, S.R.; Brubaker, G.R.

    2010-01-01

    The behavior of dense, viscous calcium bromide brine solutions used to remediate systems contaminated with dense nonaqueous phase liquids (DNAPLs) is considered in laboratory and field porous medium systems. The density and viscosity of brine solutions are experimentally investigated and functional forms fit over a wide range of mass fractions. A density of 1.7 times, and a corresponding viscosity of 6.3 times, that of water is obtained at a calcium bromide mass fraction of 0.53. A three-dimensional laboratory cell is used to investigate the establishment, persistence, and rate of removal of a stratified dense brine layer in a controlled system. Results from a field-scale experiment performed at the Dover National Test Site are used to investigate the ability to establish and maintain a dense brine layer as a component of a DNAPL recovery strategy, and to recover the brine at sufficiently high mass fractions to support the economical reuse of the brine. The results of both laboratory and field experiments show that a dense brine layer can be established, maintained, and recovered to a significant extent. Regions of unstable density profiles are shown to develop and persist in the field-scale experiment, which we attribute to regions of low hydraulic conductivity. The saturated-unsaturated, variable-density ground-water flow simulation code SUTRA is modified to describe the system of interest, and used to compare simulations to experimental observations and to investigate certain unobserved aspects of these complex systems. The model results show that the standard model formulation is not appropriate for capturing the behavior of sharp density gradients observed during the dense brine experiments. PMID:20444520

  4. Constructing Dense Graphs with Unique Hamiltonian Cycles

    ERIC Educational Resources Information Center

    Lynch, Mark A. M.

    2012-01-01

    It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…

  5. Optimization of geometry, material and economic parameters of a two-zone subcritical reactor for transmutation of nuclear waste with SERPENT Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Gulik, Volodymyr; Tkaczyk, Alan Henry

    2014-06-01

    An optimization study of a subcritical two-zone homogeneous reactor was carried out, taking into consideration geometry, material, and economic parameters. The advantage of a two-zone subcritical system over a single-zone system is demonstrated. The study investigated the optimal volume ratio for the inner and outer zones of the subcritical reactor, in terms of the neutron-physical parameters as well as fuel cost. Optimal geometrical parameters of the system are suggested for different material compositions.

  6. Probing cold dense nuclear matter.

    PubMed

    Subedi, R; Shneor, R; Monaghan, P; Anderson, B D; Aniol, K; Annand, J; Arrington, J; Benaoum, H; Benmokhtar, F; Boeglin, W; Chen, J-P; Choi, Seonho; Cisbani, E; Craver, B; Frullani, S; Garibaldi, F; Gilad, S; Gilman, R; Glamazdin, O; Hansen, J-O; Higinbotham, D W; Holmstrom, T; Ibrahim, H; Igarashi, R; de Jager, C W; Jans, E; Jiang, X; Kaufman, L J; Kelleher, A; Kolarkar, A; Kumbartzki, G; Lerose, J J; Lindgren, R; Liyanage, N; Margaziotis, D J; Markowitz, P; Marrone, S; Mazouz, M; Meekins, D; Michaels, R; Moffit, B; Perdrisat, C F; Piasetzky, E; Potokar, M; Punjabi, V; Qiang, Y; Reinhold, J; Ron, G; Rosner, G; Saha, A; Sawatzky, B; Shahinyan, A; Sirca, S; Slifer, K; Solvignon, P; Sulkosky, V; Urciuoli, G M; Voutier, E; Watson, J W; Weinstein, L B; Wojtsekhowski, B; Wood, S; Zheng, X-C; Zhu, L

    2008-06-13

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars. PMID:18511658

  7. Probing Cold Dense Nuclear Matter

    SciTech Connect

    Subedi, Ramesh; Shneor, R.; Monaghan, Peter; Anderson, Bryon; Aniol, Konrad; Annand, John; Arrington, John; Benaoum, Hachemi; Benmokhtar, Fatiha; Bertozzi, William; Boeglin, Werner; Chen, Jian-Ping; Choi, Seonho; Cisbani, Evaristo; Craver, Brandon; Frullani, Salvatore; Garibaldi, Franco; Gilad, Shalev; Gilman, Ronald; Glamazdin, Oleksandr; Hansen, Jens-Ole; Higinbotham, Douglas; Holmstrom, Timothy; Ibrahim, Hassan; Igarashi, Ryuichi; De Jager, Cornelis; Jans, Eddy; Jiang, Xiaodong; Kaufman, Lisa; Kelleher, Aidan; Kolarkar, Ameya; Kumbartzki, Gerfried; LeRose, John; Lindgren, Richard; Liyanage, Nilanga; Margaziotis, Demetrius; Markowitz, Pete; Marrone, Stefano; Mazouz, Malek; Meekins, David; Michaels, Robert; Moffit, Bryan; Perdrisat, Charles; Piasetzky, Eliazer; Potokar, Milan; Punjabi, Vina; Qiang, Yi; Reinhold, Joerg; Ron, Guy; Rosner, Guenther; Saha, Arunava; Sawatzky, Bradley; Shahinyan, Albert; Sirca, Simon; Slifer, Karl; Solvignon, Patricia; Sulkosky, Vincent; Urciuoli, Guido; Voutier, Eric; Watson, John; Weinstein, Lawrence; Wojtsekhowski, Bogdan; Wood, Stephen; Zheng, Xiaochao; Zhu, Lingyan

    2008-06-01

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.

  8. Magnetism in Dense Quark Matter

    NASA Astrophysics Data System (ADS)

    Ferrer, Efrain J.; de la Incera, Vivian

    We review the mechanisms via which an external magnetic field can affect the ground state of cold and dense quark matter. In the absence of a magnetic field, at asymptotically high densities, cold quark matter is in the Color-Flavor-Locked (CFL) phase of color superconductivity characterized by three scales: the superconducting gap, the gluon Meissner mass, and the baryonic chemical potential. When an applied magnetic field becomes comparable with each of these scales, new phases and/or condensates may emerge. They include the magnetic CFL (MCFL) phase that becomes relevant for fields of the order of the gap scale; the paramagnetic CFL, important when the field is of the order of the Meissner mass, and a spin-one condensate associated to the magnetic moment of the Cooper pairs, significant at fields of the order of the chemical potential. We discuss the equation of state (EoS) of MCFL matter for a large range of field values and consider possible applications of the magnetic effects on dense quark matter to the astrophysics of compact stars.

  9. Inference by replication in densely connected systems

    SciTech Connect

    Neirotti, Juan P.; Saad, David

    2007-10-15

    An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.

  10. Inference by replication in densely connected systems.

    PubMed

    Neirotti, Juan P; Saad, David

    2007-10-01

    An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance. PMID:17995074