Science.gov

Sample records for optimal dense coding

  1. MHD Code Optimizations and Jets in Dense Gaseous Halos

    NASA Astrophysics Data System (ADS)

    Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max

    We have further optimized and extended the 3D MHD code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel - on top of the shared memory parallelization. On a 512^3 grid, we reach 561 Gflops with 32 nodes on the SX-8. Also, we have successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a strange reluctance of cold material to enter the low density cocoon, which has to be investigated further.

  2. Optimal Dense Coding and Swap Operation Between Two Coupled Electronic Spins: Effects of Nuclear Field and Spin-Orbit Interaction

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Zhang, Guo-Feng

    2016-08-01

    The effects of the nuclear field and spin-orbit interaction on dense coding and the swap operation are studied in detail for both the antiferromagnetic (AFM) and ferromagnetic (FM) coupling cases. The conditions for valid dense coding and for a feasible swap operation are given.

  3. Optimized QKD BB84 protocol using quantum dense coding and CNOT gates: feasibility based on probabilistic optical devices

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2014-05-01

    In this work, we simulate a fiber-based Quantum Key Distribution Protocol (QKDP) BB84 working at the telecom wavelength of 1550 nm, taking into consideration an optimized attack strategy. We consider a quantum channel composed of a probabilistic Single Photon Source (SPS), single-mode optical fiber, and a quantum detector with high efficiency. We show the advantages of using Quantum Dots (QD) embedded in a micro-cavity compared to Heralded Single Photon Sources (HSPS). Second, we show that Eve always obtains some information, depending on the mean photon number per pulse of the SPS used, and we therefore propose an optimized version of the QKDP BB84 based on Quantum Dense Coding (QDC) that could be implemented with quantum CNOT gates. We evaluate the success probability of implementing the optimized QKDP BB84 when using today's probabilistic quantum optical devices for circuit realization. For our modeling, we use an abstract probabilistic model of a CNOT gate based on linear optical components with a success probability of √(4/27), and we take into consideration the best SPS realizations, namely the QD and the HSPS, generating a single photon per pulse with success probabilities of 0.73 and 0.37, respectively. We show that the protocol is totally secure against attacks but can be correctly implemented only with a success probability of a few percent.
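
    The few-percent figure follows from multiplying the component success probabilities. Below is a minimal sketch chaining the numbers quoted in the abstract; the assumed circuit composition (two source photons and two linear-optics CNOT gates) is illustrative, not taken from the paper.

    ```python
    # Overall success probability of a probabilistic optical circuit.
    # Component values are from the abstract; the circuit composition
    # (two photons, two CNOT gates) is an illustrative assumption.
    import math

    p_cnot = math.sqrt(4 / 27)   # linear-optics CNOT success probability
    p_qd   = 0.73                # quantum-dot single-photon source
    p_hsps = 0.37                # heralded single-photon source

    def run_success(p_source, n_photons=2, n_cnots=2):
        """All probabilistic components must succeed in the same shot."""
        return (p_source ** n_photons) * (p_cnot ** n_cnots)

    print(f"QD:   {run_success(p_qd):.1%}")    # ~7.9%
    print(f"HSPS: {run_success(p_hsps):.1%}")  # ~2.0%
    ```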

  4. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-04-11

    The test data obtained from the Baseline Assessment, which compares the performance of the density tracers to that of different sizes of coal particles, is now complete. The experimental results show that the tracer data can indeed be used to accurately predict HMC performance. The following conclusions were drawn: (i) the tracer curve is slightly sharper than the curve for the coarsest size fraction of coal (probably due to the greater resolution of the tracer technique), (ii) the Ep increases with decreasing coal particle size, and (iii) the Ep values are not excessively large for well-maintained HMC circuits. The major problems discovered were associated with improper apex-to-vortex finder ratios and particle hang-up due to media segregation. Only one plant yielded test data typical of a fully optimized level of performance.

  5. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-01-14

    During the past quarter, float-sink analyses were completed for four of the seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid-February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operators' manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.

  6. Relating quantum discord with the quantum dense coding capacity

    SciTech Connect

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  7. Relating quantum discord with the quantum dense coding capacity

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-01

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  8. Computer codes for dispersion of dense gas

    SciTech Connect

    Weber, A.H.; Watts, J.R.

    1982-02-01

    Two models for describing the behavior of dense gases have been adapted for specific applications at the Savannah River Plant (SRP) and have been programmed on the IBM computer. One of the models has been used to predict the effect of a ruptured H₂S storage tank at the 400 Area. The other model has been used to simulate the effect of an unignited release of H₂S from the 400-Area flare tower.

  9. Code Optimization Techniques

    SciTech Connect

    MAGEE,GLEN I.

    2000-08-03

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
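
    A representative optimization of the kind described: Reed-Solomon encoders spend most of their time in GF(2^8) multiplications, which log/antilog tables reduce to two lookups and an addition. This generic sketch is not the AURA project's code, and the primitive polynomial 0x11d is a common choice assumed here for illustration.

    ```python
    # Table-driven GF(256) multiply: a classic Reed-Solomon encoding speedup.
    # EXP is doubled in length so the sum of two logs needs no modulo.
    EXP = [0] * 512   # antilog table
    LOG = [0] * 256   # log table

    x = 1
    for i in range(255):
        EXP[i] = x
        LOG[x] = i
        x <<= 1
        if x & 0x100:          # reduce modulo the primitive polynomial 0x11d
            x ^= 0x11d
    for i in range(255, 512):
        EXP[i] = EXP[i - 255]

    def gf_mul(a, b):
        """Multiply in GF(256) with two lookups and one add -- no bit loop."""
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]]

    def gf_mul_slow(a, b):
        """Reference shift-and-add multiply, the loop the tables replace."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11d
            b >>= 1
        return r

    assert all(gf_mul(a, b) == gf_mul_slow(a, b)
               for a in range(256) for b in range(256))
    ```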

  10. Parallel sparse and dense information coding streams in the electrosensory midbrain.

    PubMed

    Sproule, Michael K J; Metzen, Michael G; Chacron, Maurice J

    2015-10-21

    Efficient processing of incoming sensory information is critical for an organism's survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  11. Parallel sparse and dense information coding streams in the electrosensory midbrain

    PubMed Central

    Sproule, Michael K.J.; Metzen, Michael G.; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory information is critical for an organism’s survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  12. Controlled Dense Coding Using the Maximal Slice States

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Mo, Zhi-wen; Sun, Shu-qin

    2016-04-01

    In this paper we investigate controlled dense coding with the maximal slice states. Three schemes are presented. Our schemes employ the maximal slice states as the quantum channel, a tripartite entangled state shared by the first party (Alice), the second party (Bob), and the third party (Cliff). The supervisor (Cliff) supervises and controls the channel between Alice and Bob via measurement. By carrying out a local von Neumann measurement, a controlled-NOT operation, and a positive operator-valued measure (POVM), and by introducing an auxiliary particle, we obtain the success probability of dense coding. It is shown that the success probability of information transmitted from Alice to Bob is usually less than one. The average amount of information for each scheme is calculated in detail. These results offer deeper insight into quantum dense coding via quantum channels of partially entangled states.
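
    For readers unfamiliar with the underlying primitive, the ideal maximally entangled two-qubit case that these controlled schemes generalize can be verified in a few lines: Alice's four local Pauli encodings map a shared Bell state to the four orthogonal Bell states, so Bob's joint measurement recovers two classical bits. This sketch covers only the standard protocol, not the paper's POVM-based controlled variant.

    ```python
    # Minimal numpy check of standard two-qubit dense coding.
    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # Shared channel: the Bell state (|00> + |11>)/sqrt(2).
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    # Alice encodes 2 classical bits with one local Pauli on her qubit alone.
    encodings = {"00": I, "01": X, "10": Z, "11": X @ Z}
    states = {m: np.kron(U, I) @ bell for m, U in encodings.items()}

    # The four encoded states are mutually orthogonal (they form the Bell
    # basis), so Bob's joint measurement identifies the message perfectly.
    for m1 in states:
        for m2 in states:
            overlap = abs(np.vdot(states[m1], states[m2]))
            assert np.isclose(overlap, 1.0 if m1 == m2 else 0.0)
    print("2 bits carried by a single transmitted qubit")
    ```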

  13. Deterministic dense coding and faithful teleportation with multipartite graph states

    SciTech Connect

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-05-15

    We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a graph state to be viable for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.

  14. Comment I on "Dense coding in entangled states"

    SciTech Connect

    Wojcik, Antoni; Grudka, Andrzej

    2003-07-01

    In this Comment we question the recent analysis of two dense coding protocols presented by Lee, Ahn, and Hwang [Phys. Rev. A 66, 024304 (2002)]. We argue that in the case of two-party communication protocol, there is no reason for using a maximally entangled state of more than two qubits.

  15. Secure N-dimensional simultaneous dense coding and applications

    NASA Astrophysics Data System (ADS)

    Situ, H.; Qiu, D.; Mateus, P.; Paunković, N.

    2015-12-01

    Simultaneous dense coding (SDC) guarantees that Bob and Charlie simultaneously receive their respective information from Alice in their respective processes of dense coding. The idea is to use the so-called locking operation to “lock” the entanglement channels, thus requiring a joint unlocking operation by Bob and Charlie in order to simultaneously obtain the information sent by Alice. We present some new results on SDC: (1) We propose three SDC protocols, which use different N-dimensional entanglement (Bell state, W state and GHZ state). (2) Besides the quantum Fourier transform, two new locking operators are introduced (the double controlled-NOT operator and the SWAP operator). (3) In the case that spatially distant Bob and Charlie have to finalize the protocol by implementing the unlocking operation through communication, we improve our protocol’s fairness, with respect to Bob and Charlie, by implementing the unlocking operation in series of steps. (4) We improve the security of SDC against the intercept-resend attack. (5) We show that SDC can be used to implement a fair contract signing protocol. (6) We also show that the N-dimensional quantum Fourier transform can act as the locking operator in simultaneous teleportation of N-level quantum systems.

  16. Continuous-variable dense coding via a general Gaussian state: Monogamy relation

    NASA Astrophysics Data System (ADS)

    Lee, Jaehak; Ji, Se-Wan; Park, Jiyong; Nha, Hyunchul

    2014-08-01

    We study a continuous-variable dense coding protocol, originally proposed to employ a two-mode squeezed state, using a general two-mode Gaussian state as a quantum channel. We particularly obtain conditions to manifest quantum advantage by beating two well-known single-mode schemes, namely, the squeezed-state scheme (best Gaussian scheme) and the number-state scheme (optimal scheme achieving the Holevo bound). We then extend our study to a multipartite Gaussian state and investigate the monogamy of operational entanglement measured by the communication capacity under the dense coding protocol. We show that this operational entanglement obeys a strict monogamy relation, by means of Heisenberg's uncertainty principle among different parties; i.e., the quantum advantage for communication can be possible for only one pair of two-mode systems among many parties.

  17. SWOC: Spectral Wavelength Optimization Code

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.

    2016-06-01

    SWOC (Spectral Wavelength Optimization Code) determines the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a spectroscopic study. It computes a figure-of-merit for different spectral configurations using a user-defined list of spectral features, and, utilizing a set of flux-calibrated spectra, determines the spectral regions showing the largest differences among the spectra.
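
    A minimal sketch of the figure-of-merit idea: score a candidate wavelength window by how strongly the flux-calibrated spectra differ at the user-listed feature wavelengths. The variance-based metric and the 0.5 Å feature half-width below are assumptions for illustration; SWOC's actual figure-of-merit may be defined differently.

    ```python
    import numpy as np

    def window_fom(wavelengths, spectra, features, lo, hi):
        """Score window [lo, hi]: spread among spectra at feature pixels."""
        in_window = (wavelengths >= lo) & (wavelengths <= hi)
        near_feature = np.zeros(wavelengths.size, dtype=bool)
        for f in features:
            near_feature |= np.abs(wavelengths - f) < 0.5  # assumed half-width (A)
        mask = in_window & near_feature
        if not mask.any():
            return 0.0
        # Larger variance across the input spectra -> more discriminating power.
        return float(np.var(spectra[:, mask], axis=0).sum())

    wl = np.linspace(4000.0, 7000.0, 3000)
    spec = np.random.default_rng(0).normal(1.0, 0.05, (4, wl.size))
    # Mg b triplet wavelengths, used purely as an example feature list.
    print(window_fom(wl, spec, features=[5167.3, 5172.7, 5183.6], lo=5100, hi=5300))
    ```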

  18. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. Distributed-memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module, thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the

  19. Study of controlled dense coding with some discrete tripartite and quadripartite states

    NASA Astrophysics Data System (ADS)

    Roy, Sovik; Ghosh, Biplab

    2015-07-01

    The paper presents a detailed study of the controlled dense coding scheme for different types of three- and four-particle states: GHZ states, GHZ-type states, maximal slice (MS) states, the 4-particle GHZ state, and the W class of states. It is shown that GHZ-type states can be used for controlled dense coding in a probabilistic sense. We show the relations among the parameter of the GHZ-type state, the concurrence of the bipartite state shared by two parties, and Charlie's measurement angle θ. The GHZ states, as a special case of MS states depending on parameters, are also considered here. We find that the tripartite W state and the quadripartite W state cannot be used for controlled dense coding, whereas |Wn>ABC states can be used probabilistically. Finally, we investigate the controlled dense coding scheme for tripartite qutrit states.

  20. Optimal Codes for the Burst Erasure Channel

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2008-08-01

    We make the simple observation that the erasure burst correction capability of any (n, k) code can be extended to arbitrary lengths above n with the use of a block interleaver, and discuss nuances of this property when channel symbols are over GF(p) and the code is defined over GF(p^J), J > 1. The results imply that maximum distance separable codes (e.g., Reed-Solomon) offer optimal burst erasure protection with linear complexity, and that the optimality does not depend on the length of the code.
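
    The observation is easy to demonstrate: write D codewords row-wise into a D x n array and transmit column-wise, and any single burst of up to D consecutive erasures touches each codeword at most once. A small self-contained sketch (symbols are strings for readability):

    ```python
    # Block interleaver: one burst of length <= D*t becomes at most t
    # erasures per codeword, where t is the per-codeword capability.
    def interleave(codewords):            # codewords: equal-length lists
        n = len(codewords[0])
        return [cw[j] for j in range(n) for cw in codewords]  # column-major

    def deinterleave(stream, depth):
        n = len(stream) // depth
        return [[stream[j * depth + i] for j in range(n)] for i in range(depth)]

    D, n = 4, 8
    cws = [[f"c{i}{j}" for j in range(n)] for i in range(D)]
    tx = interleave(cws)
    burst = set(range(5, 5 + D))          # one burst of D consecutive erasures
    rx = [None if k in burst else s for k, s in enumerate(tx)]
    per_cw = [sum(s is None for s in cw) for cw in deinterleave(rx, D)]
    print(per_cw)                         # each codeword sees at most 1 erasure
    ```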

  1. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits. It can include various realistic errors and comes closer to reality than theoretical estimates. In this approach, a fast and parallel tracking code is very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. The code has been used in the NSLS-II dynamics optimizations with promising performance.

  2. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of its deep space satellites and probes (e.g., Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes, so finding good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
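
    One core subroutine of such a search is evaluating a candidate code's free distance. Below is a sketch using a shortest-path formulation over the encoder state graph, shown for the classic memory-2 code with octal generators (7, 5); the memory-14 search in the paper applies the same kind of evaluation inside a vastly larger exhaustive loop.

    ```python
    # Free distance of a rate-1/2 binary convolutional code via Dijkstra
    # search on the encoder state graph: minimum output Hamming weight over
    # all paths that diverge from and later remerge with the all-zero state.
    import heapq

    def free_distance(g1, g2, memory):
        def step(state, b):
            reg = (b << memory) | state      # newest bit atop the register
            out = (bin(g1 & reg).count("1") & 1) + (bin(g2 & reg).count("1") & 1)
            return reg >> 1, out             # shift: drop the oldest bit

        start, w0 = step(0, 1)               # diverge from the zero state
        dist = {start: w0}
        heap = [(w0, start)]
        while heap:
            w, s = heapq.heappop(heap)
            if s == 0:                       # remerged: w is the free distance
                return w
            if w > dist.get(s, float("inf")):
                continue
            for b in (0, 1):
                ns, out = step(s, b)
                if w + out < dist.get(ns, float("inf")):
                    dist[ns] = w + out
                    heapq.heappush(heap, (w + out, ns))

    print(free_distance(0b111, 0b101, memory=2))  # classic (7,5) code -> 5
    ```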

  3. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure

  4. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the board than in software simulation during program development, mainly because of improper use and an incomplete understanding of the cache-based memory by the user. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization. The processor achieves its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  5. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    SciTech Connect

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joeseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amounts compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  6. Overcoming a limitation of deterministic dense coding with a nonmaximally entangled initial state

    SciTech Connect

    Bourdon, P. S.; Gerjuoy, E.

    2010-02-15

    Under two-party deterministic dense coding, Alice communicates (perfectly distinguishable) messages to Bob via a qudit from a pair of entangled qudits in pure state |Ψ⟩. If |Ψ⟩ represents a maximally entangled state (i.e., each of its Schmidt coefficients is √(1/d)), then Alice can convey to Bob one of d² distinct messages. If |Ψ⟩ is not maximally entangled, then Ji et al. [Phys. Rev. A 73, 034307 (2006)] have shown that under the original deterministic dense-coding protocol, in which messages are encoded by unitary operations performed on Alice's qudit, it is impossible to encode d²-1 messages. Encoding d²-2 messages is possible; see, for example, the numerical studies by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. Answering a question raised by Wu et al. [Phys. Rev. A 73, 042311 (2006)], we show that when |Ψ⟩ is not maximally entangled, the communications limit of d²-2 messages persists even when the requirement that Alice encode by unitary operations on her qudit is weakened to allow encoding by more general quantum operators. We then describe a dense-coding protocol that can overcome this limitation with high probability, assuming the largest Schmidt coefficient of |Ψ⟩ is sufficiently close to √(1/d). In this protocol, d²-2 of the messages are encoded via unitary operations on Alice's qudit, and the final (d²-1)-th message is encoded via a non-trace-preserving quantum operation.

  7. Optimizing Extender Code for NCSX Analyses

    SciTech Connect

    M. Richman, S. Ethier, and N. Pomphrey

    2008-01-22

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than a Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db_access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch.

  8. Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding

    NASA Astrophysics Data System (ADS)

    Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying

    2016-02-01

    We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.

  9. Effects of quantum noises and noisy quantum operations on entanglement and special dense coding

    SciTech Connect

    Quek, Sylvanus; Li Ziang; Yeo Ye

    2010-02-15

    We show how noncommuting noises could cause a Bell state χ₀ to suffer entanglement sudden death (ESD). ESD may similarly occur when a noisy operation acts, if the corresponding Hamiltonian and Lindblad operator do not commute. We study the implications of these in special dense coding S. When noises that cause ESD act, we show that χ₀ may lose its capacity for S before ESD occurs. Similarly, χ₀ may fail to yield information transfer better than classically possible when the encoding operations are noisy, though entanglement is not destroyed in the process.

  10. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  11. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  12. Statistical physics, optimization and source coding

    NASA Astrophysics Data System (ADS)

    Zecchina, Riccardo

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question ``does there exist an assignment to the variables that satisfies all constraints?'' may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms -- the survey propagation (SP) algorithms -- that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.

  13. Optimality principles for the visual code

    NASA Astrophysics Data System (ADS)

    Pitkow, Xaq

    One way to try to make sense of the complexities of our visual system is to hypothesize that evolution has developed nearly optimal solutions to the problems organisms face in the environment. In this thesis, we study two such principles of optimality for the visual code. In the first half of this dissertation, we consider the principle of decorrelation. Influential theories assert that the center-surround receptive fields of retinal neurons remove spatial correlations present in the visual world. It has been proposed that this decorrelation serves to maximize information transmission to the brain by avoiding transfer of redundant information through optic nerve fibers of limited capacity. While these theories successfully account for several aspects of visual perception, the notion that the outputs of the retina are less correlated than its inputs has never been directly tested at the site of the putative information bottleneck, the optic nerve. We presented visual stimuli with naturalistic image correlations to the salamander retina while recording responses of many retinal ganglion cells using a microelectrode array. The output signals of ganglion cells are indeed decorrelated compared to the visual input, but the receptive fields are only partly responsible. Much of the decorrelation is due to the nonlinear processing by neurons rather than the linear receptive fields. This form of decorrelation dramatically limits information transmission. Instead of improving coding efficiency we show that the nonlinearity is well suited to enable a combinatorial code or to signal robust stimulus features. In the second half of this dissertation, we develop an ideal observer model for the task of discriminating between two small stimuli which move along an unknown retinal trajectory induced by fixational eye movements. The ideal observer is provided with the responses of a model retina and guesses the stimulus identity based on the maximum likelihood rule, which involves sums

  14. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
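
    The runtime pattern is easy to illustrate at the language level. The sketch below is only an analogue: the patent concerns compiler-generated machine-code versions, while here two hand-written Python variants stand in for the aggressively and conservatively compiled versions, with an exception triggering the "rollback". Re-execution is valid only because the functions are pure.

    ```python
    # Dual-version execution with fallback, in the spirit of the scheme above.
    def make_guarded(aggressive, conservative):
        def guarded(*args):
            try:
                return aggressive(*args)    # fast path: unsafe optimization applied
            except (ArithmeticError, IndexError):
                return conservative(*args)  # rollback: re-execute the safe version
        return guarded

    def sum_ratios_fast(xs, ys):
        return sum(x / y for x, y in zip(xs, ys))         # assumes no zero divisors

    def sum_ratios_safe(xs, ys):
        return sum(x / y for x, y in zip(xs, ys) if y != 0)

    f = make_guarded(sum_ratios_fast, sum_ratios_safe)
    print(f([1, 2], [2, 0]))  # fast path raises ZeroDivisionError -> safe path: 0.5
    ```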

  15. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
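
    The optimal codes in question are the Golomb codes (a unary quotient followed by a truncated-binary remainder); Gallager and Van Voorhis showed which Golomb parameter m is optimal for a given geometric ratio p. A minimal encoder sketch:

    ```python
    # Golomb encoder for nonnegative integers with parameter m.
    def golomb_encode(n, m):
        q, r = divmod(n, m)
        out = "1" * q + "0"                  # unary part for the quotient
        b = m.bit_length()
        if m & (m - 1) == 0:                 # m a power of two: plain Rice code
            return out + format(r, f"0{b - 1}b") if m > 1 else out
        cutoff = (1 << b) - m                # truncated binary for the remainder
        if r < cutoff:
            return out + format(r, f"0{b - 1}b")
        return out + format(r + cutoff, f"0{b}b")

    # For a geometric source P(n) = (1-p) * p**n, the optimal m is the integer
    # satisfying p**m + p**(m+1) <= 1 < p**(m-1) + p**m (Gallager & Van Voorhis).
    for n in range(6):
        print(n, golomb_encode(n, 3))
    ```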

  16. Optimal protein-folding codes from spin-glass theory.

    PubMed Central

    Goldstein, R A; Luthey-Schulten, Z A; Wolynes, P G

    1992-01-01

    Protein-folding codes embodied in sequence-dependent energy functions can be optimized using spin-glass theory. Optimal folding codes for associative-memory Hamiltonians based on aligned sequences are deduced. A screening method based on these codes correctly recognizes protein structures in the "twilight zone" of sequence identity in the overwhelming majority of cases. Simulated annealing for the optimally encoded Hamiltonian generally leads to qualitatively correct structures. PMID:1594594

  17. Robustly optimal rate one-half binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1975-01-01

    Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.

  18. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.

  19. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.

  20. Analysis of the optimality of the standard genetic code.

    PubMed

    Kumar, Balaji; Saini, Supreet

    2016-07-19

    Many theories have been proposed attempting to explain the origin of the genetic code. While strong reasons remain to believe that the genetic code evolved as a frozen accident, at least for the first few amino acids, other theories remain viable. In this work, we test the optimality of the standard genetic code against approximately 17 million genetic codes, and locate 29 that outperform the standard genetic code on the following three criteria: (a) robustness to point mutation; (b) robustness to frameshift mutation; and (c) ability to encode additional information in the coding region. We use a genetic algorithm to generate and score codes from different parts of the associated landscape, which are, as a result, presumably more representative of the entire landscape. Our results show that while the genetic code is sub-optimal for robustness to frameshift mutation and the ability to encode additional information in the coding region, it is very strongly selected for robustness to point mutation. This, coupled with the observation that the different performance indicator scores for a particular genetic code are negatively correlated, makes the standard genetic code nearly optimal for the three criteria tested in this work. PMID:27327359
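
    The point-mutation criterion can be made concrete: score a code by the mean squared change of an amino-acid property over all single-nucleotide substitutions. The property values and the toy two-amino-acid code below are placeholders (polar-requirement values for Leu and Lys), not the paper's actual scoring functions.

    ```python
    # Toy point-mutation robustness score for a codon table.
    from itertools import product

    BASES = "UCAG"

    def mutation_cost(code, prop):
        """code: codon -> amino acid; prop: amino acid -> numeric property."""
        total, count = 0.0, 0
        for codon in code:
            for pos in range(3):
                for b in BASES:
                    if b == codon[pos]:
                        continue
                    mut = codon[:pos] + b + codon[pos + 1:]
                    a1, a2 = code.get(codon), code.get(mut)
                    if a1 and a2:                  # skip stops/missing codons
                        total += (prop[a1] - prop[a2]) ** 2
                        count += 1
        return total / count                       # lower = more robust

    toy_code = {c: ("Leu" if c[0] in "UC" else "Lys")
                for c in ("".join(t) for t in product(BASES, repeat=3))}
    toy_prop = {"Leu": 4.9, "Lys": 10.1}           # polar requirement values
    print(mutation_cost(toy_code, toy_prop))
    ```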

  1. Optimizing ATLAS code with different profilers

    NASA Astrophysics Data System (ADS)

    Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.

    2014-06-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used in improving the performance of the new magnetic field code and in identifying potential vectorization targets in several places, such as the Runge-Kutta propagation code.

  2. One-shot absolute pattern for dense reconstruction using DeBruijn coding and Windowed Fourier Transform

    NASA Astrophysics Data System (ADS)

    Fernandez, Sergio; Salvi, Joaquim

    2013-03-01

    Shape reconstruction using coded structured light (SL) is considered one of the most reliable techniques to recover object surfaces. Among SL techniques, the achievement of dense acquisition for moving scenarios constitutes an active field of research. A common solution is to project a single one-shot fringe pattern, extracting depth from the phase deviation of the imaged pattern. However, the algorithms employed to unwrap the phase are computationally slow and can fail in the presence of depth discontinuities and occlusions. In this work, a proposal for a new one-shot dense pattern that combines DeBruijn and Windowed Fourier Transform to obtain a dense, absolute, accurate and computationally fast 3D reconstruction is presented and compared with other existing techniques.

  3. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-06-01

    Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects with methods reported to date. Approach. We optimize stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives
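
    The structure of the optimization can be sketched with a toy linear program: maximize the current density along a chosen direction in the ROI subject to per-electrode, total-injection, and zero-net-current constraints. The random transfer matrix below stands in for the head-model solution, and the constraint set is a simplified subset of the paper's (which also bounds current power in the brain and uses a full convex formulation).

    ```python
    # Toy LP version of directional electrode-pattern optimization.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n_elec = 32
    A = rng.normal(size=(3, n_elec))   # ROI field per unit electrode current (toy)
    d = np.array([0.0, 0.0, 1.0])      # desired current direction in the ROI

    c = -(d @ A)                       # maximize d^T A x == minimize -d^T A x
    i_max, i_total = 1.0, 2.0          # per-electrode / total injection bounds (mA)

    # Split x = xp - xm (xp, xm >= 0) so sum(|x|) <= 2*i_total is linear;
    # with zero net current, total injected current is sum(|x|)/2 <= i_total.
    c2 = np.concatenate([c, -c])
    A_ub = np.ones((1, 2 * n_elec))
    A_eq = np.concatenate([np.ones(n_elec), -np.ones(n_elec)])[None, :]
    res = linprog(c2, A_ub=A_ub, b_ub=[2 * i_total], A_eq=A_eq, b_eq=[0.0],
                  bounds=[(0, i_max)] * (2 * n_elec))
    x = res.x[:n_elec] - res.x[n_elec:]
    print("directional current in ROI:", d @ A @ x)
    ```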

  4. Optimization of KINETICS Chemical Computation Code

    NASA Technical Reports Server (NTRS)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.

  5. Effects of intrinsic decoherence on various correlations and quantum dense coding in a two superconducting charge qubit system

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Maimaitiyiming-Tusun; Parouke-Paerhati; Ahmad-Abliz

    2015-09-01

    The influence of intrinsic decoherence on various correlations and dense coding in a model consisting of two identical superconducting charge qubits coupled by a fixed capacitor is investigated. The results show that, despite the intrinsic decoherence, the correlations as well as the dense coding channel capacity can be effectively increased via a suitable combination of system parameters, i.e., by making the mutual coupling energy between the two charge qubits larger than the Josephson energy of the qubits. The larger the difference between them, the stronger the effect. Project supported by the Project to Develop Outstanding Young Scientific Talents of China (Grant No. 2013711019), the Natural Science Foundation of Xinjiang Province, China (Grant No. 2012211A052), the Foundation for Key Program of Ministry of Education of China (Grant No. 212193), and the Innovative Foundation for Graduate Students Granted by the Key Subjects of Theoretical Physics of Xinjiang Province, China (Grant No. LLWLL201301).

  6. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
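
    Both figures of merit are direct to compute, and short lengths can be searched exhaustively. The sketch below scores ±1 sequences by peak periodic sidelobe and sidelobe energy; length 13 is used so brute force stays fast, whereas lengths 28 to 64 required the larger searches reported above.

    ```python
    # Exhaustive search over binary sequences for the two sidelobe criteria.
    import numpy as np
    from itertools import product

    def periodic_acf(seq):
        s = np.array(seq)
        return np.array([int(np.dot(s, np.roll(s, k))) for k in range(len(s))])

    def sidelobe_metrics(seq):
        acf = periodic_acf(seq)[1:]          # exclude the zero-lag peak
        return int(max(abs(acf))), int((acf ** 2).sum())

    best = min((sidelobe_metrics([1 if b else -1 for b in bits]), bits)
               for bits in product([0, 1], repeat=13))
    print("best (peak sidelobe, sidelobe energy):", best[0])
    ```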

  7. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
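
    The data-reorganization strategy can be illustrated in a few lines: sort particles by owning cell so that a deposition sweep walks grid memory sequentially instead of hopping randomly. A numpy stand-in for what the paper does in Fortran:

    ```python
    # Sort particles by cell index to improve cache locality of deposition.
    import numpy as np

    rng = np.random.default_rng(0)
    nx, n_part = 64, 100_000
    x = rng.uniform(0, nx, n_part)           # particle positions
    v = rng.normal(size=n_part)              # particle velocities

    cell = x.astype(np.int64)                # owning cell of each particle
    order = np.argsort(cell, kind="stable")  # gather permutation
    x, v, cell = x[order], v[order], cell[order]

    # After sorting, a charge-deposition sweep touches grid memory in order,
    # so each cache line of the grid is loaded once instead of repeatedly.
    rho = np.zeros(nx)
    np.add.at(rho, cell, 1.0)                # nearest-grid-point deposition
    print(rho.sum() == n_part)
    ```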

  8. Optimizing Nuclear Physics Codes on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Nam, Hai Ah

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  9. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  10. Optimal Grouping and Matching for Network-Coded Cooperative Communications

    SciTech Connect

    Sharma, S; Shi, Y; Hou, Y T; Kompella, S; Midkiff, S F

    2011-11-01

    Network-coded cooperative communications (NC-CC) is a new advance in wireless networking that exploits network coding (NC) to improve the performance of cooperative communications (CC). However, there remains very limited understanding of this new hybrid technology, particularly at the link layer and above. This paper fills in this gap by studying a network optimization problem that requires joint optimization of session grouping, relay node grouping, and matching of session/relay groups. After showing that this problem is NP-hard, we present a polynomial time heuristic algorithm to this problem. Using simulation results, we show that our algorithm is highly competitive and can produce near-optimal results.

  11. A systematic method of interconnection optimization for dense-array concentrator photovoltaic system.

    PubMed

    Siaw, Fei-Lu; Chong, Kok-Keong

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points (short-circuit, open-circuit, and maximum power point) are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, with the measured maximum output power differing by only 1.34%. PMID:24453823

  12. A Systematic Method of Interconnection Optimization for Dense-Array Concentrator Photovoltaic System

    PubMed Central

    Siaw, Fei-Lu; Chong, Kok-Keong

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points (short-circuit, open-circuit, and maximum power point) are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, with the measured maximum output power differing by only 1.34%. PMID:24453823
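
    The prediction step rests on a simple mismatch rule: cells in series are limited to the smallest cell current, while parallel strings add their currents. A toy comparison of two interconnection layouts under non-uniform illumination (random currents stand in for values derived from measured flux maps):

    ```python
    # Compare series/parallel interconnection layouts under current mismatch.
    import numpy as np

    rng = np.random.default_rng(2)
    isc = rng.uniform(0.8, 1.2, size=(4, 6))   # toy per-cell currents, 4x6 array

    def series_parallel(groups):
        """groups: list of series strings (lists of cell currents)."""
        return sum(min(string) for string in groups)  # parallel sum of series minima

    rows = [list(r) for r in isc]               # series strings along rows
    cols = [list(c) for c in isc.T]             # series strings along columns
    print("row-series config:", series_parallel(rows))
    print("col-series config:", series_parallel(cols))
    ```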

  13. Simulation of two- and three-dimensional dense solute plume behavior with the METROPOL-3 code

    SciTech Connect

    Oostrom, M.; Roberson, K.R.; Leijnse, A.

    1994-07-01

    Contaminant plumes emanating from waste disposal facilities are often denser than the ambient groundwater. These so-called dense plumes sink deeper into phreatic aquifers and may, under certain conditions, become unstable. The behavior of variable-density, aqueous-phase contaminant plumes in saturated, homogeneous 2-D and 3-D intermediate-scale aquifer models was investigated with the finite element code METROPOL-3. The numerical results compare, in a quantitative sense, with previously reported laboratory-scale transport experiments. The simulations show that dense plumes penetrate deeper into aquifers, and eventually become unstable, as the density difference between the leachate solution and the ambient groundwater increases, and likewise with increases in other important parameters such as the saturated hydraulic conductivity of the porous medium, the leakage rate of the contaminant solution, and the source width. The significance of unstable behavior decreases with increasing dispersivity values. It was observed that 3-D flow patterns have a stabilizing effect on dense contaminant plume behavior.

  14. Performance optimization of dense-array concentrator photovoltaic system considering effects of circumsolar radiation and slope error.

    PubMed

    Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui

    2015-07-27

    This paper presents an approach to optimizing the electrical performance of a dense-array concentrator photovoltaic system composed of a non-imaging dish concentrator, considering the effects of circumsolar radiation and slope error. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar cell interconnection circuit in a dense-array concentrator photovoltaic module is proposed, minimizing the current mismatch caused by the non-uniformity of concentrated sunlight. An optimized layout of the interconnection circuit, with a minimum electrical power loss of 6.5%, can be achieved by minimizing the effects of both circumsolar radiation and slope error. PMID:26367685

  15. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows the production of high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing bigger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" both in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
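
    The optimization ingredient can be caricatured with a toy (1+1) evolution strategy (illustrative only; the paper's algorithm uses specialized mutation operators, skeleton orders, and jackknife verification): mutate a marker order by reversing a random segment and keep the offspring if it is no worse. The distance matrix dist is a placeholder for pairwise recombination distances.

        import random

        def order_cost(order, dist):
            # total recombination distance between adjacent markers
            return sum(dist[a][b] for a, b in zip(order, order[1:]))

        def es_marker_order(dist, iters=20000, seed=0):
            # (1+1) evolution strategy: mutate the current order by
            # reversing a random segment (which keeps "potentially good"
            # sub-orders intact) and accept if the offspring is no worse.
            rng = random.Random(seed)
            n = len(dist)
            best = list(range(n))
            rng.shuffle(best)
            best_cost = order_cost(best, dist)
            for _ in range(iters):
                i, j = sorted(rng.sample(range(n), 2))
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                cost = order_cost(cand, dist)
                if cost <= best_cost:
                    best, best_cost = cand, cost
            return best, best_cost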

  16. Code optimization for tagged-token data flow machines

    SciTech Connect

    Bohm, A.P.W.; Sargeant, J.

    1989-01-01

    The efficiency of dataflow code generated from a high-level language can be improved dramatically by both conventional and dataflow-specific optimizations. Such techniques are used in implementing the single-assignment language SISAL on the Manchester Dataflow Machine. The quality of code generated for numeric applications can be measured in terms of the ratio of total number of instructions executed to floating point operations: the MIPS/MFLOPS ratio. Relevant features of the general purpose single-assignment language SISAL and the Manchester Dataflow Machine are introduced. After an assessment of the initial SISAL implementation, showing it to be very expensive, a range of optimizations are described.

  18. Casting polymer nets to optimize noisy molecular codes

    PubMed Central

    Tlusty, Tsvi

    2008-01-01

    Life relies on the efficient performance of molecular codes, which relate symbols and meanings via error-prone molecular recognition. We describe how optimizing a code to withstand the impact of molecular recognition noise may be understood from the statistics of a two-dimensional network made of polymers. The noisy code is defined by partitioning the space of symbols into regions according to their meanings. The “polymers” are the boundaries between these regions, and their statistics define the cost and the quality of the noisy code. When the parameters that control the cost–quality balance are varied, the polymer network undergoes a transition, where the number of encoded meanings rises discontinuously. Effects of population dynamics on the evolution of molecular codes are discussed. PMID:18550822

  19. Optimized design and research of secondary microprism for dense array concentrating photovoltaic module

    NASA Astrophysics Data System (ADS)

    Yang, Guanghui; Chen, Bingzhen; Liu, Youqiang; Guo, Limin; Yao, Shun; Wang, Zhiyong

    2015-10-01

    As the critical component of a concentrating photovoltaic module, secondary concentrators can be effective in increasing the acceptance angle and incident light, as well as improving the energy uniformity of focal spots. This paper presents a design of a transmission-type secondary microprism for dense-array concentrating photovoltaic modules. The 3-D model of this design is established in SolidWorks, and important parameters such as the inclination angle and component height are optimized using Zemax. According to the design and simulation results, several secondary microprisms with different parameters were fabricated and tested in combination with a Fresnel lens and a multi-junction solar cell. The sun-simulator I-V test results show that the combination has the highest output power when the secondary microprism height is 5 mm and the top facet side length is 7 mm. Compared with the case without a secondary microprism, the output power improves by 11% with the employment of secondary microprisms, indicating their indispensability in concentrating photovoltaic modules.

  20. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless channel, including the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for error-free channels.
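
    The bit-assignment step can be illustrated, for the error-free case and under the standard high-rate distortion model, by a greedy marginal-return allocation (a sketch, not the paper's steepest-descent algorithm):

        import heapq

        def greedy_bit_allocation(variances, total_bits):
            # Under the high-rate model D_i(b) = var_i * 2**(-2b), each
            # extra bit cuts a coefficient's distortion by a factor of 4,
            # so repeatedly granting the bit with the largest marginal
            # distortion drop yields the optimal integer allocation.
            bits = [0] * len(variances)
            heap = [(-0.75 * v, i) for i, v in enumerate(variances)]
            heapq.heapify(heap)  # max-heap via negated gains
            for _ in range(total_bits):
                gain, i = heapq.heappop(heap)
                bits[i] += 1
                heapq.heappush(heap, (gain / 4.0, i))  # next bit gains 1/4 as much
            return bits

        print(greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], 8))  # -> [4, 3, 1, 0]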

  1. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masters, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with a characteristic pulse duration of 1 ns and a transverse dimension of order 1 mm. NDCX II will be used in studies of material in the warm dense matter (WDM) regime and in ion beam/hydrodynamic coupling experiments relevant to heavy-ion-based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore the equation of state and heavy-ion-fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions, is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL, and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or similar mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  2. A Simple Model of Optimal Population Coding for Sensory Systems

    PubMed Central

    Doi, Eizaburo; Lewicki, Michael S.

    2014-01-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery. PMID:25121492

  3. Optimal transform coding in the presence of quantization noise.

    PubMed

    Diamantaras, K I; Strintzis, M G

    1999-01-01

    The optimal linear Karhunen-Loeve transform (KLT) attains the minimum reconstruction error for a fixed number of transform coefficients, assuming that these coefficients do not contain noise. In any real coding system, however, the representation of the coefficients using a finite number of bits requires the presence of quantizers. We formulate the optimal linear transform using a data model that incorporates the quantization noise. Our solution does not correspond to an orthogonal transform, and in fact it achieves a smaller mean squared error (MSE) than the KLT in the noisy case. Like the KLT, our solution depends on the statistics of the input signal, but it also depends on the bit-rate used for each coefficient. Especially for images, based on our optimality theory, we propose a simple modification of the discrete cosine transform (DCT). Our coding experiments show a peak signal-to-noise ratio (PSNR) performance improvement over JPEG of the order of 0.2 dB, with an overhead of less than 0.01 b/pixel. PMID:18267426

  4. Optimal bounds for parity-oblivious random access codes

    NASA Astrophysics Data System (ADS)

    Chailloux, André; Kerenidis, Iordanis; Kundu, Srijita; Sikora, Jamie

    2016-04-01

    Random access coding is an information task that has been extensively studied and has found many applications in quantum information. In this scenario, Alice receives an n-bit string x and wishes to encode x into a quantum state ρ_x, such that Bob, when receiving the state ρ_x, can choose any bit i ∈ [n] and recover the input bit x_i with high probability. Here we study two variants: parity-oblivious random access codes (RACs), where we impose the cryptographic property that Bob cannot infer any information about the parity of any subset of bits of the input apart from the single bits x_i; and even-parity-oblivious RACs, where Bob cannot infer any information about the parity of any even-size subset of bits of the input. In this paper, we provide the optimal bounds for parity-oblivious quantum RACs and show that they are asymptotically better than the optimal classical ones. Our results provide a large non-contextuality inequality violation and resolve the main open problem in a work of Spekkens et al (2009 Phys. Rev. Lett. 102 010401). Second, we provide the optimal bounds for even-parity-oblivious RACs by proving their equivalence to a non-local game and by providing tight bounds for the success probability of the non-local game via semidefinite programming. In the case of even-parity-oblivious RACs, the cryptographic property holds also in the device-independent model.

  5. Efficient sensory cortical coding optimizes pursuit eye movements.

    PubMed

    Liu, Bing; Macellaio, Matthew V; Osborne, Leslie C

    2016-01-01

    In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214

  6. On optimization of integration properties of biphase coded signals

    NASA Astrophysics Data System (ADS)

    Qiu, Wanzhi; Xiang, Jingcheng

    Within the context of the requirements for agile waveforms with a large compression ratio in biphase coded radars, and on the basis of the characteristics of interpulse integration processing of radar signals, the study proposes two sequence optimization criteria suited to the radar processing patterns: interpulse waveform agility - pulse compression - FFT, and MTI - pulse compression - noncoherent integration. Applications of these criteria to optimizing sequences of length 127 are carried out. The output peak ratio of mainlobe to sidelobe (RMS) is improved considerably without a weighting network, while the autocorrelation and cross-correlation profiles of the sequences are very satisfactory. The RMS of coherent integration and noncoherent integration of eight sequences are 34.12 and 28.1 dB, respectively, when the return signals have zero Doppler shift. These values are about 12 and 6 dB higher than the RMS of single signals before integration.
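
    The sidelobe criterion is easy to reproduce for a single sequence. The sketch below (illustrative; it uses the well-known Barker-13 code rather than the paper's length-127 sequences) computes the aperiodic autocorrelation and its mainlobe-to-peak-sidelobe ratio in dB:

        import numpy as np

        def mainlobe_to_sidelobe_db(seq):
            # aperiodic autocorrelation of a +/-1 sequence; the criterion
            # is the mainlobe peak over the largest sidelobe magnitude
            s = np.asarray(seq, dtype=float)
            r = np.correlate(s, s, mode="full")
            main = r[len(s) - 1]                      # zero-lag peak
            side = np.abs(np.delete(r, len(s) - 1)).max()
            return 20.0 * np.log10(main / side)

        barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
        print("%.1f dB" % mainlobe_to_sidelobe_db(barker13))   # about 22.3 dB

    A random or exhaustive search over candidate sequences can then rank them by this figure, which is the spirit of the optimization described above.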

  7. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between the concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-epsilon turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization

  9. ITER ICRF antenna analysis and optimization using the TOPICA code

    NASA Astrophysics Data System (ADS)

    Milanesio, D.; Maggiora, R.

    2010-02-01

    This paper documents the complete analysis and optimization of the ITER ion cyclotron range of frequency (ICRF) launcher using the TOPICA code, carried out in the frame of EFDA design activities. The possibility to simulate the detailed geometry of an ICRF antenna in front of a realistic plasma description, and to obtain the antenna input parameters and the radiated near electric field distribution, is of paramount importance to evaluate and predict the overall system performance. Starting from a reference geometry, we pursued a detailed electrical optimization of the IC launcher and arrived at a final geometry showing a remarkable increase in the power coupled to the plasma. The optimization procedure involved the modification of different parts of the antenna, such as the horizontal septa, the coaxial cables, the coax-to-feeder transitions, the feeders, the strap and the grounding. Eventually, the optimized geometry was the object of a comprehensive analysis, varying the working frequency, the plasma conditions and the poloidal and toroidal phasings between the feeding cables. The performance of the antenna was assessed not only in terms of input parameters and power coupled to the plasma, but also by means of power spectra and the evaluation of the RF potentials.

  10. Neutron Activation Analysis PRognosis and Optimization Code System.

    Energy Science and Technology Software Center (ESTSC)

    2004-08-20

    Version 00 NAAPRO predicts the results and main characteristics (detection limits, determination limits, measurement limits, and relative precision of the analysis) of neutron activation analysis (instrumental and radiochemical). Gamma-ray dose rates at different points in time after sample irradiation and the input count rate of the spectrometry system are also predicted. The code uses a standard Windows user interface and extensive graphical tools for the visualization of the spectrometer characteristics (efficiency, response, and background) and the simulated spectrum. The optimization part is not included in the current version of the code. This release is designated NAAPRO, Version 01.beta. The MCNP code was used for generating detector responses. The PREPRO-2000 and FCONV programs were used in the preparation of the program's nuclear databases. A special program was developed for viewing, editing, and updating the program databases (not included in the present program package). The MCNP, PREPRO-2000 and FCONV software packages are not included in the NAAPRO package.

  11. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with, and employed in, DYNSUB to compensate for pin-level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefitted from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  12. Optimization of Coded Aperture Radioscintigraphy for Sentinel Lymph Node Mapping

    PubMed Central

    Fujii, Hirofumi; Idoine, John D.; Gioux, Sylvain; Accorsi, Roberto; Slochower, David R.; Lanza, Richard C.; Frangioni, John V.

    2011-01-01

    Purpose Radioscintigraphic imaging during sentinel lymph node (SLN) mapping could potentially improve localization; however, parallel-hole collimators have certain limitations. In this study, we explored the use of coded aperture (CA) collimators. Procedures Equations were derived for the six major dependent variables of CA collimators (i.e., masks) as a function of the ten major independent variables, and an optimized mask was fabricated. After validation, dual-modality CA and near-infrared (NIR) fluorescence SLN mapping was performed in pigs. Results Mask optimization required the judicious balance of competing dependent variables, resulting in sensitivity of 0.35%, XY resolution of 2.0 mm, and Z resolution of 4.2 mm at an 11.5 cm FOV. Findings in pigs suggested that NIR fluorescence imaging and CA radioscintigraphy could be complementary, but present difficult technical challenges. Conclusions This study lays the foundation for using CA collimation for SLN mapping, and also exposes several problems that require further investigation. PMID:21567254

  13. Image-Guided Non-Local Dense Matching with Three-Steps Optimization

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Zhang, Yongjun; Yue, Zhaoxi

    2016-06-01

    This paper introduces a new image-guided non-local dense matching algorithm that focuses on how to solve the following problems: 1) mitigating the influence of vertical parallax to the cost computation in stereo pairs; 2) guaranteeing the performance of dense matching in homogeneous intensity regions with significant disparity changes; 3) limiting the inaccurate cost propagated from depth discontinuity regions; 4) guaranteeing that the path between two pixels in the same region is connected; and 5) defining the cost propagation function between the reliable pixel and the unreliable pixel during disparity interpolation. This paper combines the Census histogram and an improved histogram of oriented gradient (HOG) feature together as the cost metrics, which are then aggregated based on a new iterative non-local matching method and the semi-global matching method. Finally, new rules of cost propagation between the valid pixels and the invalid pixels are defined to improve the disparity interpolation results. The results of our experiments using the benchmarks and the Toronto aerial images from the International Society for Photogrammetry and Remote Sensing (ISPRS) show that the proposed new method can outperform most of the current state-of-the-art stereo dense matching methods.
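
    As a rough illustration of the cost-computation ingredient (a simplified stand-in for the paper's combined Census-histogram and HOG metric), a census transform with a Hamming-distance cost can be sketched as follows:

        import numpy as np

        def census(img, w=3):
            # Census transform: for each pixel, a bit string records which
            # of the w*w - 1 neighbours is darker than the centre (borders
            # wrap here for brevity); robust to radiometric differences.
            r = w // 2
            codes = np.zeros(img.shape, dtype=np.uint64)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    nb = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                    codes = (codes << np.uint64(1)) | (nb < img).astype(np.uint64)
            return codes

        def census_cost(left, right):
            # matching cost = Hamming distance between the census codes
            x = np.bitwise_xor(census(left), census(right))
            popcount = np.vectorize(lambda v: bin(int(v)).count("1"))
            return popcount(x)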

  14. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated, with formula derivations of acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results prove that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and lower fluctuations of the circular pressure distribution can be produced for more sources. With increasing source frequency, the acoustic pressure of acoustical vortices increases accordingly, with a decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is achieved for longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances were measured, and they agree well with the results of the numerical simulations. The favorable results for the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.
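
    The point-source superposition described here can be reproduced in outline. The sketch below (illustrative; the frequency, ring radius, and distances are hypothetical, not the experimental values) sums the contributions of N phase-coded monopoles and reports the fluctuation of the circular pressure distribution:

        import numpy as np

        def vortex_pressure(l, n_src, a, k, rho, phi, z):
            # complex pressure of n_src ring sources at radius a, the n-th
            # driven with phase offset l*phi_n (topological charge l),
            # evaluated at the cylindrical field point (rho, phi, z)
            phi_n = 2 * np.pi * np.arange(n_src) / n_src
            sx, sy = a * np.cos(phi_n), a * np.sin(phi_n)
            fx, fy = rho * np.cos(phi), rho * np.sin(phi)
            d = np.sqrt((fx - sx) ** 2 + (fy - sy) ** 2 + z ** 2)
            return np.sum(np.exp(1j * (k * d + l * phi_n)) / d)

        # circular pressure distribution on a ring in the observation plane
        k = 2 * np.pi * 40e3 / 343.0      # 40 kHz in air
        amps = [abs(vortex_pressure(1, 6, 0.05, k, 0.01, p, 0.15))
                for p in np.linspace(0, 2 * np.pi, 181)]
        print("circular fluctuation: %.3f" % (max(amps) / min(amps)))

    Increasing n_src in such a sweep shows the smoother circular distribution that the paper reports for larger source numbers.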

  15. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  16. Stability of the genetic code and optimal parameters of amino acids.

    PubMed

    Chechetkin, V R; Lobzin, V V

    2011-01-21

    The standard genetic code is known to be much more efficient in minimizing the adverse effects of misreading errors and one-point mutations than a random code having the same structure, i.e. the same number of codons coding for each particular amino acid. We study the inverse problem: how the code structure affects the optimal physico-chemical parameters of amino acids ensuring the highest stability of the genetic code. It is shown that the choice of two or more amino acids with given properties determines unambiguously all the others. In this sense the code structure strictly determines the optimal parameters of amino acids, and the corresponding scales may be derived directly from the genetic code. In a code with the structure of the standard genetic code, the resulting values for hydrophobicity, obtained in the "leave one out" scheme and in the scheme with fixed maximum and minimum parameters, correlate significantly with the natural scale. The comparison of the optimal and natural parameters allows assessing the relative impact of physico-chemical and error-minimization factors during the evolution of the genetic code. As the resulting optimal scale depends on the choice of amino acids with given parameters, the technique can also be applied to testing various scenarios of code evolution with an increasing number of codified amino acids. Our results indicate the co-evolution of the genetic code and the physico-chemical properties of the recruited amino acids. PMID:20955716

  17. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
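
    The core idea of diversity coding can be sketched with a single XOR protection link (a toy illustration, not the dissertation's full scheme):

        from functools import reduce

        def encode(payloads):
            # N working links plus one protection link that carries the
            # bitwise XOR of all payloads (the diversity-coding idea)
            return payloads + [reduce(lambda a, b: a ^ b, payloads)]

        def recover(received):
            # any single lost link (None) is rebuilt as the XOR of the
            # survivors, with no rerouting or signaling after the failure
            lost = received.index(None)
            value = reduce(lambda a, b: a ^ b,
                           [x for x in received if x is not None])
            restored = list(received)
            restored[lost] = value
            return restored[:-1]          # drop the protection link

        links = encode([0b1010, 0b0111, 0b1100])
        links[1] = None                   # simulate a link failure
        print(recover(links))             # -> [10, 7, 12]

    Because the receiver can reconstruct the lost payload purely from what it already receives, restoration is near-hitless, which is the property the dissertation optimizes for.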

  18. Stochastic dynamic programming for reservoir optimal control: Dense discretization and inflow correlation assumption made possible by parallel computing

    NASA Astrophysics Data System (ADS)

    Piccardi, Carlo; Soncini-Sessa, Rodolfo

    1991-05-01

    The solution via dynamic programming (DP) of a reservoir optimal control problem is often computationally prohibitive when the proper description of the inflow process leads to a system model having several state variables and/or when a sufficiently dense state discretization is required to achieve numerical accuracy. Thus, to simplify, the inflow correlation is usually neglected and/or a coarse state discretization is adopted. However, these simplifications may significantly affect the reliability of the solution of the optimization problem. Nowadays, the availability of very powerful computers based on innovative architectures (vector and parallel machines), even in the domain of personal computers (transputer architectures), stimulates the reformulation of the standard dynamic programming algorithm in a form able to exploit these new machine architectures. The reformulated DP algorithm and new machines enable faster and less costly solution of optimization problems involving a system model having two state variables (storage and previous period inflow, then taking into account the inflow correlation) and a number of states (of the order of 10^4) such as to guarantee a high numerical accuracy.
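
    A compact sketch of this two-state-variable formulation is given below (the storage grid, release options, reward function, and inflow transition matrix P are all placeholders, not values from the paper):

        import numpy as np

        def sdp_policy(storages, inflows, P, releases, reward, horizon, s_max):
            # Backward stochastic DP with two state variables: storage s
            # and the previous period's inflow class q, so the inflow
            # correlation enters through the Markov matrix P[q][q2].
            nS, nQ = len(storages), len(inflows)
            V = np.zeros((nS, nQ))
            policy = np.zeros((horizon, nS, nQ), dtype=int)
            for t in reversed(range(horizon)):
                Vn = np.full((nS, nQ), -np.inf)
                for i, s in enumerate(storages):
                    for q in range(nQ):
                        for a, r in enumerate(releases):
                            val = 0.0
                            for q2 in range(nQ):   # next inflow class
                                s2 = min(max(s + inflows[q2] - r, 0.0), s_max)
                                j = int(np.abs(storages - s2).argmin())
                                val += P[q][q2] * (reward(s, r) + V[j, q2])
                            if val > Vn[i, q]:
                                Vn[i, q], policy[t, i, q] = val, a
                V = Vn
            return policy

        # e.g. storages = np.linspace(0, 100, 51); the loop over states is
        # exactly what a vector or parallel machine would distribute.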

  19. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
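
    The stride behaviour described above can be made concrete with a toy direct-mapped cache simulator (illustrative parameters; real caches add associativity, which changes the conflict pattern):

        def miss_ratio(n_accesses, stride, lines=256, words_per_line=8):
            # direct-mapped cache model: a strided sweep over a large
            # array; unit stride reuses each fetched line, while larger
            # (and especially power-of-two) strides waste fetched lines
            tags = [None] * lines
            misses = 0
            addr = 0
            for _ in range(n_accesses):
                line = addr // words_per_line
                idx = line % lines
                if tags[idx] != line:
                    tags[idx] = line      # fill on miss
                    misses += 1
                addr += stride
            return misses / n_accesses

        for s in (1, 2, 8, 2048):
            print(s, miss_ratio(100000, s))   # 0.125, 0.25, 1.0, 1.0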

  20. Optimization of Ambient Noise Cross-Correlation Imaging Across Large Dense Array

    NASA Astrophysics Data System (ADS)

    Sufri, O.; Xie, Y.; Lin, F. C.; Song, W.

    2015-12-01

    Ambient Noise Tomography is currently one of the most studied topics of seismology. It offers the possibility of studying the physical properties of rocks from shallow subsurface depths to upper mantle depths using recorded noise sources. A network of new seismic sensors, capable of recording continuous seismic noise and doing the processing on-site at the same time, could help to assess the possible risk of volcanic activity on a volcano and help to understand the changes in the physical properties of a fault before and after an earthquake occurs. This new seismic sensor technology could also be used in the oil and gas industry to figure out the depletion rate of a reservoir and help to improve velocity models for obtaining better seismic reflection cross-sections. Our recent NSF-funded project is bringing seismologists, signal processors, and computer scientists together to develop a new ambient noise seismic imaging system which could record continuous seismic noise, process it on-site, and send Green's functions and/or tomography images to the network. Such an imaging system requires an optimum number of sensors, sensor communication, and processing of the recorded data. In order to solve these problems, we first started working on the problem of the optimum number of sensors and the communication between these sensors by using a small-aperture dense network called the Sweetwater Array, deployed by Nodal Seismic in 2014. We downloaded ~17 days of continuous data from 2268 one-component stations between March 30 and April 16, 2015 from the IRIS DMC and performed cross-correlation to determine the lag times between station pairs. The lag times were then entered in matrix form. Our goal is to select random lag-time values in the matrix, assume all other elements of the matrix are missing or unknown, and perform a matrix completion technique to find out how close the results from matrix completion would be to the real calculated values. This would give us a better idea
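
    The completion experiment can be sketched generically (illustrative only; this uses a simple iterated truncated-SVD scheme, not necessarily the authors' choice of algorithm):

        import numpy as np

        def complete(M, mask, rank, iters=200):
            # iterated truncated SVD: fill unknown entries with the current
            # low-rank estimate, project back to the target rank, then
            # re-impose the measured entries
            X = np.where(mask, M, 0.0)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                X[mask] = M[mask]
            return X

        rng = np.random.default_rng(1)
        A = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 60))  # rank-3 truth
        mask = rng.random(A.shape) < 0.3                         # 30% observed
        print(np.abs(complete(A, mask, 3) - A).max())            # typically small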

  1. Experimental qualification of a code for optimizing gamma irradiation facilities

    NASA Astrophysics Data System (ADS)

    Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.

    Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are frequently found difficult to meet.

  2. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  3. Emergence of optimal decoding of population codes through STDP.

    PubMed

    Habenschuss, Stefan; Puhr, Helmut; Maass, Wolfgang

    2013-06-01

    The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli. PMID:23517096

  4. A new algorithm for optimizing the wavelength coverage for spectroscopic studies: Spectral Wavelength Optimization Code (SWOC)

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.; Feltzing, S.; Lind, K.; Caffau, E.; Korn, A. J.; Schnurr, O.; Hansen, C. J.; Koch, A.; Sbordone, L.; de Jong, R. S.

    2016-09-01

    The past decade and a half has seen the design and execution of several ground-based spectroscopic surveys, both Galactic and Extragalactic. Additionally, new surveys are being designed that extend the boundaries of current surveys. In this context, many important considerations must be made when designing a spectrograph for the future. Among these is the determination of the optimum wavelength coverage. In this work, we present a new code for determining the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a given survey. In its first mode, it utilizes a user-defined list of spectral features to compute a figure-of-merit for different spectral configurations. The second mode utilizes a set of flux-calibrated spectra, determining the spectral regions that show the largest differences among the spectra. Our algorithm is easily adaptable for any set of science requirements and any spectrograph design. We apply the algorithm to several examples, including 4MOST, showing that the method yields important design constraints for the wavelength regions.
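
    The first mode can be caricatured in a few lines: given a user-defined line list with per-feature merits (hypothetical values below, not from the paper), slide a window of fixed width and keep the placement with the largest summed figure-of-merit:

        def best_window(lines, width):
            # slide a window of fixed width over (wavelength, merit)
            # features and keep the start with the largest summed merit
            def score(start):
                return sum(m for wl, m in lines if start <= wl <= start + width)
            best = max((wl for wl, _ in lines), key=score)
            return best, score(best)

        # hypothetical line list: (wavelength in nm, information content)
        lines = [(393.4, 5.0), (422.7, 2.0), (516.7, 3.5), (589.0, 4.0),
                 (656.3, 4.5), (849.8, 6.0), (854.2, 6.0), (866.2, 5.5)]
        print(best_window(lines, width=60.0))   # -> (849.8, 17.5)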

  6. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    NASA Astrophysics Data System (ADS)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for the noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimate of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
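
    The "no coding tables" property follows from the closed form of Golomb-Rice codes, which are optimal prefix codes for geometric sources when the code parameter matches the source; a minimal sketch (the choice k=2 is arbitrary here):

        def zigzag(x):
            # fold the two-sided distribution onto 0, 1, 2, ...:
            # 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
            return 2 * x if x >= 0 else -2 * x - 1

        def rice_encode(x, k):
            # Golomb-Rice code with m = 2**k: unary quotient, then a
            # k-bit remainder; only k must be stored, not a code table
            u = zigzag(x)
            q, r = u >> k, u & ((1 << k) - 1)
            return "1" * q + "0" + format(r, "0%db" % k)

        print([rice_encode(x, 2) for x in (0, -1, 1, 3)])
        # -> ['000', '001', '010', '1010']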

  7. Wireless image transmission using turbo codes and optimal unequal error protection.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2005-11-01

    A novel image transmission scheme is proposed for the communication of set partitioning in hierarchical trees image streams over wireless channels. The proposed scheme employs turbo codes and Reed-Solomon codes in order to deal effectively with burst errors. An algorithm for the optimal unequal error protection of the compressed bitstream is also proposed and applied in conjunction with an inherently more efficient technique for product code decoding. The resulting scheme is tested for the transmission of images over wireless channels. Experimental evaluation clearly demonstrates the superiority of the proposed transmission system in comparison to well-known robust coding schemes. PMID:16279187

  8. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission. PMID:16900669

  9. Joint optimization of run-length coding, Huffman coding, and quantization table with complete baseline JPEG decoder compatibility.

    PubMed

    Yang, En-hui; Wang, Longji

    2009-01-01

    To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding. PMID:19095519

  10. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-09-14

    All project activities are now winding down. Follow-up tracer tests were conducted at several of the industrial test sites and analysis of the experimental data is currently underway. All required field work was completed during this quarter. In addition, the heavy medium cyclone simulation and expert system programs are nearly completed and user manuals are being prepared. Administrative activities (e.g., project documents, cost-sharing accounts, etc.) are being reviewed and prepared for final submission to DOE. All project reporting requirements are up to date. All financial expenditures are within approved limits.

  11. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The project start date was delayed by approximately 7 weeks due to contractual difficulties. Although the original start date was December 14, 2000, the Principal Investigator did not receive the Project Authorization Notice (PAN) from the Virginia Tech Office of Sponsored Programs until February 5, 2001. Therefore, the first project task (i.e., Project Planning) did not begin until February 2001. Activities completed as part of this effort included: (i) revision and updating of the Project Work Plan, (ii) preparation of equipment procurement documents for the Virginia Tech Purchasing Office, and (iii) initiation of preliminary site visits to several coal preparation plants to discuss test work with industrial personnel. After a brief (2 month) contractual delay, project activities are now underway. There are currently no contractual issues or technical problems associated with this project. Project work activities are now expected to proceed in accordance with the proposed project schedule.

  12. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    David M. Hyman

    2002-01-14

    All work associated with Task 1 (Baseline Assessment) was successfully completed and preliminary corrections/recommendations were provided back to the management at each test site. Detailed float-sink tests were completed for Site No. 1 and are currently underway for Sites No. 2 through No. 4. Unfortunately, the work associated with sample analyses (Task 4--Sample Analysis) has been delayed because of a backlog of coal samples at the commercial laboratory participating in this project. As a result, a no-cost project time extension may be necessary in order to complete the project. A decision will be made at the end of the next reporting period. Some of the work completed this quarter included (i) development of mass balance routines for data analysis, (ii) formulation of an expert system rule base, and (iii) completion of statistical computations and mathematical curve fits for the density tracer test data. In addition, an ''O & M Checklist'' was prepared to provide plant operators with simple operating and maintenance guidelines that must be followed to obtain good HMC performance.

  13. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The fieldwork associated with Task 1 (Baseline Assessment) was completed this quarter. Detailed cyclone inspections were completed at all but one plant during maintenance shifts. Analysis of the test samples is also currently underway in Task 4 (Sample Analysis). A Draft Recommendation was prepared for the management at each test site in Task 2 (Circuit Modification). All required procurements were completed. Density tracers were manufactured and tested for quality control purposes. Special sampling tools were also purchased and/or fabricated for each plant site. The preliminary experimental data show that the partitioning performance for all seven HMC circuits was generally good. This was attributed to well-maintained cyclones and good operating practices. However, the density tracers revealed that most circuits suffered from poor control of the media cutpoint. These problems were attributed to poor x-ray calibration and improper manual density measurements. These conclusions will be validated after the analyses of the composite samples have been completed.

  14. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-01-15

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. An effort is underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  15. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-09-09

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. An effort is underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  16. Signal-to-noise-optimal scaling of heterogeneous population codes.

    PubMed

    Leibold, Christian

    2013-01-01

    Similarity measures for neuronal population responses that are based on scalar products can be uninformative if the neurons have different firing statistics. Based on signal-to-noise optimality, this paper derives positive weighting factors for the individual neurons' response rates in a heterogeneous neuronal population. The weights depend only on empirical statistics. If firing follows Poisson statistics, the weights can be interpreted as mutual information per spike. The scaling is shown to improve linear separability and clustering as compared to unscaled inputs. PMID:23984844
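
    A minimal Python sketch of the idea (illustrative only: the weighting below is a standard discriminability ratio, squared mean difference over pooled variance, not the paper's exact derivation, and all rates are invented):

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical heterogeneous population: two stimuli, four Poisson
        # neurons with very different mean rates (all values illustrative).
        rates_a = np.array([2.0, 5.0, 40.0, 80.0])   # mean counts, stimulus A
        rates_b = np.array([3.0, 4.0, 42.0, 120.0])  # mean counts, stimulus B
        trials_a = rng.poisson(rates_a, size=(500, 4))
        trials_b = rng.poisson(rates_b, size=(500, 4))

        # Positive per-neuron weights computed from empirical statistics only:
        # squared mean difference divided by pooled variance.
        mu_a, mu_b = trials_a.mean(0), trials_b.mean(0)
        var = 0.5 * (trials_a.var(0) + trials_b.var(0))
        w = (mu_a - mu_b) ** 2 / var

        def separation(x, y):
            # distance between class means of the summed (scalar-product)
            # readout, in units of its pooled standard deviation
            px, py = x.sum(1), y.sum(1)
            return abs(px.mean() - py.mean()) / np.sqrt(0.5 * (px.var() + py.var()))

        print("unscaled:", separation(trials_a, trials_b))
        print("scaled  :", separation(trials_a * w, trials_b * w))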

  17. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may previously have been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint-form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours of processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found.
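
    The forward/reverse distinction is easy to see on a toy function. A hedged Python sketch (unrelated to ADIFOR/ADJIFOR's generated FORTRAN; the function and all names are invented) showing that forward mode needs one pass per input while reverse mode yields the whole gradient in one backward sweep:

        import math

        # Toy illustration: differentiate f(x0, x1) = sin(x0)*x1 + x0**2.

        def f_forward(x, dx):
            # Forward mode: propagate (value, derivative) pairs through each
            # op; one pass yields df/d(one input), so n inputs need n passes.
            v0, d0 = x[0], dx[0]
            v1, d1 = x[1], dx[1]
            a, da = math.sin(v0), math.cos(v0) * d0
            b, db = a * v1, da * v1 + a * d1
            c, dc = v0 ** 2, 2 * v0 * d0
            return b + c, db + dc

        def f_reverse(x):
            # Reverse mode: one forward sweep records values, one backward
            # sweep accumulates adjoints; the whole gradient comes at once.
            v0, v1 = x
            a = math.sin(v0)              # forward sweep
            bar_a = v1                    # d(out)/da, seeding adjoint of out with 1
            bar_v1 = a                    # d(out)/dv1
            bar_v0 = bar_a * math.cos(v0) + 2 * v0
            return [bar_v0, bar_v1]

        x = [0.7, 1.3]
        print(f_forward(x, [1, 0])[1], f_forward(x, [0, 1])[1])  # two passes
        print(f_reverse(x))                                      # one pass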

  18. A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.

    PubMed

    Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick

    2015-11-01

    In this paper, we investigate the problem of optimizing multivariate performance measures and propose a novel algorithm for it. Unlike traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points as a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, sparse codes, and parameter of the linear function, we propose a joint optimization problem. In this problem, both the reconstruction error and sparsity of the sparse codes, and the upper bound of the complex loss function, are minimized. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameter. To optimize this problem, we develop an iterative algorithm based on gradient descent methods to learn the sparse codes and hyper-predictor parameter alternately. Experimental results on some benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms. PMID:26291045
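
    A schematic of the alternating scheme in Python (a loose sketch under simplifying assumptions: squared losses stand in for the paper's multivariate loss bound, and all dimensions and penalty weights are invented):

        import numpy as np

        rng = np.random.default_rng(1)
        n, d, k = 60, 20, 10                    # samples, features, atoms
        X = rng.normal(size=(n, d))
        y = np.sign(X[:, 0])                    # toy binary labels
        D = rng.normal(size=(k, d))             # dictionary
        S = rng.normal(size=(n, k)) * 0.1       # sparse codes
        w = np.zeros(k)                         # linear (hyper-)predictor
        lam, gamma, lr, eps = 0.1, 0.5, 0.02, 1e-3

        for it in range(300):
            # (1) codes: gradient step on reconstruction + L1 sparsity
            #     + (squared, stand-in) prediction loss
            grad_S = ((S @ D - X) @ D.T + lam * np.sign(S)
                      + gamma * np.outer(S @ w - y, w))
            S -= lr * grad_S
            # (2) dictionary: ridge least-squares given the codes
            D = np.linalg.solve(S.T @ S + eps * np.eye(k), S.T @ X)
            # (3) predictor: ridge least-squares given the codes
            w = np.linalg.solve(S.T @ S + eps * np.eye(k), S.T @ y)

        print("reconstruction MSE:", float(np.mean((S @ D - X) ** 2)))
        print("train accuracy    :", float(np.mean(np.sign(S @ w) == y)))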

  19. Dispersion-optimized optical fiber for high-speed long-haul dense wavelength division multiplexing transmission

    NASA Astrophysics Data System (ADS)

    Wu, Jindong; Chen, Liuhua; Li, Qingguo; Wu, Wenwen; Sun, Keyuan; Wu, Xingkun

    2011-07-01

    Four non-zero-dispersion-shifted fibers with almost the same large effective area (Aeff) and optimized dispersion properties are realized by novel index-profile design and by modified vapor axial deposition and modified chemical vapor deposition processes. An Aeff of greater than 71 μm2 is obtained for the designed fibers. Three of the developed fibers with positive dispersion are improved by reducing the 1550 nm dispersion slope from 0.072 ps/nm2/km to 0.063 ps/nm2/km or 0.05 ps/nm2/km, increasing the 1550 nm dispersion from 4.972 ps/nm/km to 5.679 ps/nm/km or 7.776 ps/nm/km, and shifting the zero-dispersion wavelength from 1500 nm to 1450 nm. One of these fibers complies with the G.655D and G.656 specifications simultaneously, and another with G.655E and G.656; both fibers are beneficial to high-bit-rate long-haul dense wavelength division multiplexing systems over the S-, C-, and L-bands. The fourth developed fiber, with negative dispersion, is also improved by reducing the 1550 nm dispersion slope from 0.12 ps/nm2/km to 0.085 ps/nm2/km and shifting the 1550 nm dispersion from -4 ps/nm/km to -6.016 ps/nm/km, providing facilities for submarine transmission systems. Experimental measurements indicate that the developed fibers all have excellent optical transmission and good macrobending and splice performances.

  20. DOPEX-1D2C: A one-dimensional, two-constraint radiation shield optimization code

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1973-01-01

    A one-dimensional, two-constraint radiation shield weight optimization procedure and a computer program, DOPEX-1D2C, are described. DOPEX-1D2C uses the steepest-descent method to alter a set of initial (input) thicknesses of a spherical shield configuration to achieve a minimum weight while simultaneously satisfying two dose-rate constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. Code input instructions, a FORTRAN-4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is less than 1/2 minute on an IBM 7094.
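
    The approach is easy to sketch. A minimal Python analogue (all densities, attenuation coefficients, and dose limits below are invented; DOPEX-1D2C's actual geometry factors and update rule are not reproduced) that minimizes shield weight under two exponential dose-rate constraints by penalized steepest descent:

        import numpy as np

        rho = np.array([7.8, 1.0, 11.3])        # layer densities (invented)
        mu1 = np.array([0.5, 0.1, 1.2])         # attenuation, dose type 1
        mu2 = np.array([0.2, 0.8, 0.6])         # attenuation, dose type 2
        D1_0, D2_0 = 1e4, 5e3                   # unshielded dose rates
        D1_max, D2_max = 1.0, 1.0               # dose-rate constraints

        def weight(t):
            return np.sum(rho * t)              # spherical geometry ignored

        def doses(t):
            # assumed exponential dose vs. thickness relation per layer
            return D1_0 * np.exp(-np.dot(mu1, t)), D2_0 * np.exp(-np.dot(mu2, t))

        def penalized(t, p=1e3):
            d1, d2 = doses(t)
            return weight(t) + p * (max(0.0, d1 - D1_max) ** 2
                                    + max(0.0, d2 - D2_max) ** 2)

        t = np.full(3, 20.0)                    # initial thicknesses
        for _ in range(5000):                   # steepest descent, numeric grad
            g = np.array([(penalized(t + 1e-4 * e) - penalized(t)) / 1e-4
                          for e in np.eye(3)])
            t = np.clip(t - 0.01 * g / (np.linalg.norm(g) + 1e-12), 0, None)

        print("thicknesses:", t.round(2), "weight:", round(weight(t), 2))
        print("dose rates :", doses(t))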

  1. An Optimization Multi-path Inter-Session Network Coding in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Xia, Zhuo-Qun; Liu, Chao; Zhu, Xue-Han; Liu, Pin-Chao; Xie, Li-Tong

    Wireless sensor networks (WSNs) typically provide several paths from a source to a destination; using such paths efficiently has the potential not only to increase the achieved end-to-end rate multiplicatively, but also to provide robustness against performance fluctuations of any single link in the system. Network coding is a new technique that improves network performance. In this paper we analyze how to use network coding according to the characteristics of multi-path routing in WSNs. As a result, an optimized multi-path inter-session network coding scheme is designed to improve WSN performance.

  2. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
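
    The adaptive-selection idea can be sketched in a few lines of Python (a schematic only: the actual coder's options differ; here each option is a Golomb-Rice parameter k, and the winning option's ID is assumed to be sent as side information):

        def rice_bits(sample, k):
            # Bits to encode one non-negative sample with Rice parameter k:
            # unary quotient + 1 stop bit + k remainder bits.
            return (sample >> k) + 1 + k

        def block_cost(block, k):
            return sum(rice_bits(s, k) for s in block)

        def encode_block(block, options=(0, 1, 2, 3, 4)):
            # Pick the option with the fewest estimated bits for this block.
            best = min(options, key=lambda k: block_cost(block, k))
            return best, block_cost(block, best)

        low_entropy = [0, 1, 0, 2, 1, 0, 0, 1]
        high_entropy = [9, 31, 14, 22, 7, 40, 18, 25]
        print(encode_block(low_entropy))    # a small k wins
        print(encode_block(high_entropy))   # a larger k wins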

  3. CodHonEditor: Spreadsheets for Codon Optimization and Editing of Protein Coding Sequences.

    PubMed

    Takai, Kazuyuki

    2016-05-01

    Gene synthesis is becoming more important with the growing availability of low-cost commercial services. Coding sequences are often "optimized" with respect to relative synonymous codon usage (RSCU) before synthesis, a step generally included in the commercial services. However, the codon optimization processes differ among providers and are often hidden from the users. Here, the d'Hondt method, which is widely adopted for determining the number of seats for each party in proportional-representation public elections, is applied to RSCU fitting. This allowed me to make a set of electronic spreadsheets for the manual design of protein coding sequences for expression in Escherichia coli, with which users can see the process of codon optimization and can manually edit the codons after the automatic optimization. The spreadsheets may also be useful for molecular biology education. PMID:27002987
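
    The d'Hondt rule itself is a tiny algorithm. A Python sketch (the codon counts below are rough illustrative figures, not measured E. coli usage data) distributing the leucine codons of a hypothetical gene in proportion to usage counts:

        import heapq

        def dhondt(counts, seats):
            # d'Hondt apportionment: repeatedly award the next "seat" to the
            # candidate with the largest count / (seats_won + 1) quotient.
            heap = [(-c, name) for name, c in counts.items()]  # max-heap
            heapq.heapify(heap)
            won = {name: 0 for name in counts}
            for _ in range(seats):
                _, name = heapq.heappop(heap)
                won[name] += 1
                heapq.heappush(heap, (-counts[name] / (won[name] + 1), name))
            return won

        # Illustrative: distribute the 7 leucine codons of a hypothetical
        # gene according to (invented) usage counts.
        leu_usage = {"CTG": 5240, "TTA": 1390, "TTG": 1370,
                     "CTC": 1110, "CTT": 1100, "CTA": 390}
        print(dhondt(leu_usage, seats=7))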

  4. Optimal Multicarrier Phase-Coded Waveform Design for Detection of Extended Targets

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2013-01-01

    We design a parametric multicarrier phase-coded (MCPC) waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. Traditional waveform design techniques provide only the optimal energy spectral density of the transmit waveform and suffer a performance loss in the synthesis process of the time-domain signal. Therefore, we opt to directly design an MCPC waveform in terms of its time-frequency codes to obtain the optimal detection performance. First, we describe the modeling assumptions, considering an extended target buried within signal-dependent clutter with known power spectral density, and deduce the performance characteristics of the optimal detector. Then, considering an MCPC signal transmission, we express the detection characteristics in terms of the phase codes of the MCPC waveform and propose to optimally design the MCPC signal by maximizing the detection probability. Our numerical results demonstrate that the designed MCPC signal attains the optimal detection performance and requires less computational time than the other parametric waveform design approach.

  5. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511 keV source strength of 0.001 photons/sq cm/s, the source location accuracy is expected to be + or - 0.2 deg.

  6. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered, in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and the attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  7. A wavelet-based neural model to optimize and read out a temporal population code

    PubMed Central

    Luvizotto, Andre; Rennó-Costa, César; Verschure, Paul F. M. J.

    2012-01-01

    It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations where spatio-temporal transformations at lower levels form a compact stimulus encoding using TPC that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned

  8. Multidimensional optimization of fusion reactors using heterogeneous codes and engineering software

    NASA Astrophysics Data System (ADS)

    Hartwig, Zachary; Olynyk, Geoffrey; Whyte, Dennis

    2012-10-01

    Magnetic confinement fusion reactors are tightly coupled systems. The parameters under a designer's control, such as magnetic field, wall temperature, and blanket thickness, simultaneously affect the behavior, performance, and components of the reactor, leading to complex tradeoffs and design optimizations. In addition, the engineering analyses require non-trivial, self-consistent inputs, such as reactor geometry, to ensure high fidelity between the various physics and engineering design codes. We present a framework for analysis and multidimensional optimization of fusion reactor systems based on the coupling of heterogeneous codes and engineering software. While this approach is widely used in industry, most code-coupling efforts in fusion have been focused on plasma and edge physics. Instead, we use a simplified plasma model to concentrate on how fusion neutrons and heat transfer affect the design of the first wall, breeding blanket, and magnet systems. The framework combines solid modeling, neutronics, and engineering multiphysics codes and software, linked across Windows and Linux clusters. Initial results for optimizing the design of a compact, high-field tokamak reactor based on high-temperature demountable superconducting coils and a liquid blanket are presented.

  9. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  10. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    SciTech Connect

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan; Quinlan, Daniel

    2013-11-23

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  11. Optimization of energy saving device combined with a propeller using real-coded genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ryu, Tomohiro; Kanemaru, Takashi; Kataoka, Shiro; Arihama, Kiyoshi; Yoshitake, Akira; Arakawa, Daijiro; Ando, Jun

    2014-06-01

    This paper presents a numerical optimization method to improve the performance of the propeller with Turbo-Ring using real-coded genetic algorithm. In the presented method, Unimodal Normal Distribution Crossover (UNDX) and Minimal Generation Gap (MGG) model are used as crossover operator and generation-alternation model, respectively. Propeller characteristics are evaluated by a simple surface panel method "SQCM" in the optimization process. Blade sections of the original Turbo-Ring and propeller are replaced by the NACA66 a = 0.8 section. However, original chord, skew, rake and maximum blade thickness distributions in the radial direction are unchanged. Pitch and maximum camber distributions in the radial direction are selected as the design variables. Optimization is conducted to maximize the efficiency of the propeller with Turbo-Ring. The experimental result shows that the efficiency of the optimized propeller with Turbo-Ring is higher than that of the original propeller with Turbo-Ring.
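
    A compact Python sketch of the two operators named above (a simplified reading: the UNDX noise model and the MGG replacement rule below are reduced versions, the quadratic stand-in objective replaces the SQCM propeller evaluation, and all constants are invented):

        import numpy as np

        rng = np.random.default_rng(4)

        def undx(p1, p2, p3, alpha=0.5, beta=0.35):
            # Simplified UNDX: child near the midpoint of p1 and p2, perturbed
            # along p2-p1 and, scaled by parent 3's off-axis distance, across it.
            n = len(p1)
            mid, d = (p1 + p2) / 2.0, p2 - p1
            e = d / (np.linalg.norm(d) + 1e-12)
            off = (p3 - p1) - np.dot(p3 - p1, e) * e  # p3's offset from the axis
            D = np.linalg.norm(off)
            z = rng.normal(0.0, beta / np.sqrt(n), n)
            z -= np.dot(z, e) * e                     # orthogonal component only
            return mid + rng.normal(0.0, alpha) * d + D * z

        def f(x):                                     # stand-in objective
            return np.sum((x - 2.0) ** 2)             # (a real run would call SQCM)

        pop = [rng.uniform(-5, 5, 2) for _ in range(30)]
        for _ in range(300):
            # MGG-style generation alternation: replace the worse of a random
            # parent pair with the best of a few UNDX children if it improves.
            i, j, k = rng.choice(len(pop), 3, replace=False)
            kids = [undx(pop[i], pop[j], pop[k]) for _ in range(8)]
            best_kid = min(kids, key=f)
            worst = i if f(pop[i]) > f(pop[j]) else j
            if f(best_kid) < f(pop[worst]):
                pop[worst] = best_kid
        print("best:", min(pop, key=f), "f:", round(min(f(p) for p in pop), 6))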

  12. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    SciTech Connect

    Watkins, E.F.; Greenspan, E.

    1995-12-31

    SWAN is a useful code for optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost of an ICF target chamber shield, and the background radiation for explosive detection systems, and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module could handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability.

  13. On the Optimized Atomic Exchange Potential method and the CASSANDRA opacity code

    NASA Astrophysics Data System (ADS)

    Jeffery, M.; Harris, J. W. O.; Hoarty, D. J.

    2016-09-01

    The CASSANDRA average-atom opacity code uses the local density approximation (LDA) to calculate electron exchange interactions, and this introduces inaccuracies due to the inconsistent treatment of the Coulomb and exchange energy terms of the average total energy equation. To correct this inconsistency, the Optimized Atomic Central Potential Method (OPM) of calculating exchange interactions has been incorporated into CASSANDRA. The LDA and OPM formalisms are discussed and the reason for the discrepancy when using the LDA is highlighted. CASSANDRA uses a Taylor series expansion about an average atom when computing transition energies and uses Janak's Theorem to determine the Taylor series coefficients. Janak's Theorem does not apply to the OPM; however, a corollary to Janak's Theorem has been employed in the OPM implementation. A derivation of this corollary is provided. Results of simulations from CASSANDRA using the OPM are shown and compared against CASSANDRA LDA, DAVROS (a detailed term accounting opacity code), the GRASP2K atomic physics code, and experimental data.

  14. Optimization of a coded aperture coherent scatter spectral imaging system for medical imaging

    NASA Astrophysics Data System (ADS)

    Greenberg, Joel A.; Lakshmanan, Manu N.; Brady, David J.; Kapadia, Anuj J.

    2015-03-01

    Coherent scatter X-ray imaging is a technique that provides spatially-resolved information about the molecular structure of the material under investigation, yielding material-specific contrast that can aid medical diagnosis and inform treatment. In this study, we demonstrate a coherent-scatter imaging approach based on the use of coded apertures (known as coded aperture coherent scatter spectral imaging [1, 2]) that enables fast, dose-efficient, high-resolution scatter imaging of biologically-relevant materials. Specifically, we discuss how to optimize a coded aperture coherent scatter imaging system for a particular set of objects and materials, describe and characterize our experimental system, and use the system to demonstrate automated material detection in biological tissue.

  15. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  16. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, so that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model, and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.

  17. Heuristic ternary error-correcting output codes via weight optimization and layered clustering-based approach.

    PubMed

    Zhang, Xiao-Lei

    2015-02-01

    One important classifier ensemble for multiclass classification problems is error-correcting output codes (ECOCs). It bridges multiclass problems and binary-class classifiers by decomposing multiclass problems into a series of binary-class problems. In this paper, we present a heuristic ternary code, named weight optimization and layered clustering-based ECOC (WOLC-ECOC). It starts with an arbitrary valid ECOC and iterates the following two steps until the training risk converges. The first step, named layered clustering-based ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing binary-class problem. The second step adds the new classifiers to the ECOC by a novel optimized weighted (OW) decoding algorithm, where the optimization problem of the decoding is solved by the cutting plane algorithm. Technically, LC-ECOC keeps the heuristic training process from being blocked by difficult binary-class problems, while OW decoding guarantees the non-increase of the training risk, ensuring a small code length. Results on 14 UCI datasets and a music genre classification problem demonstrate the effectiveness of WOLC-ECOC. PMID:25486660
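
    For readers unfamiliar with ECOC itself, a minimal Python baseline (plain binary ECOC with least-squares column classifiers and Hamming decoding, not the paper's ternary codes or OW decoding; the data and code matrix are invented):

        import numpy as np

        rng = np.random.default_rng(2)
        centers = np.array([[0, 0], [3, 0], [0, 3], [3, 3]])
        y = np.repeat(np.arange(4), 100)
        X = centers[y] + rng.normal(size=(400, 2))
        Xb = np.column_stack([X, np.ones(len(X))])     # add bias column

        # Code matrix: one row per class, one column per binary problem
        # (col 0: {0,1} vs {2,3}; col 1: {0,2} vs {1,3}; col 2: {0} vs
        # rest; col 3: {3} vs rest).
        M = np.array([[ 1,  1,  1, -1],
                      [ 1, -1, -1, -1],
                      [-1,  1, -1, -1],
                      [-1, -1, -1,  1]])

        # Train one least-squares linear classifier per column.
        W = np.column_stack([np.linalg.lstsq(Xb, M[y, c], rcond=None)[0]
                             for c in range(M.shape[1])])

        outputs = np.sign(Xb @ W)                      # hard binary outputs
        # Decode: nearest code word in Hamming distance.
        pred = np.argmin([(outputs != m).sum(1) for m in M], axis=0)
        print("train accuracy:", (pred == y).mean())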

  18. An application of anti-optimization in the process of validating aerodynamic codes

    NASA Astrophysics Data System (ADS)

    Cruz, Juan R.

    An investigation was conducted to assess the usefulness of anti-optimization in the process of validating aerodynamic codes. Anti-optimization is defined here as the intentional search for regions where the computational and experimental results disagree. Maximizing such disagreements can be a useful tool in uncovering errors and/or weaknesses in both analyses and experiments. The codes chosen for this investigation were an airfoil code and a lifting line code, used together as an analysis to predict three-dimensional wing aerodynamic coefficients. The parameter of interest was the maximum lift coefficient of the three-dimensional wing, CL,max. The test domain encompassed Mach numbers from 0.3 to 0.8 and Reynolds numbers from 25,000 to 250,000. A simple rectangular wing was designed for the experiment. A wind tunnel model of this wing was built and tested in the NASA Langley Transonic Dynamics Tunnel. Selection of the test conditions (i.e., Mach and Reynolds numbers) was made by applying the techniques of response surface methodology and considerations involving the predicted experimental uncertainty. The test was planned and executed in two phases. In the first phase, runs were conducted at the pre-planned test conditions. Based on these results, additional runs were conducted in areas where significant differences in CL,max were observed between the computational results and the experiment, in essence applying the concept of anti-optimization. These additional runs were used to verify the differences in CL,max and assess the extent of the region where these differences occurred. The results of the experiment showed that the analysis was capable of predicting CL,max to within 0.05 over most of the test domain. The application of anti-optimization succeeded in identifying a region where the computational and experimental values of CL,max differed by more than 0.05, demonstrating the usefulness of anti-optimization in the process of validating aerodynamic codes.

  19. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  20. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
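
    The stride rule of thumb the papers test is easy to reproduce. A small Python/NumPy sketch (illustrative only: absolute timings and the size of the gap are machine-dependent, which is precisely the papers' point about portability):

        import numpy as np
        import time

        a = np.random.default_rng(0).random((4000, 4000))  # C-order (row-major)

        def row_major_sum(a):
            # unit stride: each a[i, :] is contiguous in memory
            return sum(a[i, :].sum() for i in range(a.shape[0]))

        def col_major_sum(a):
            # large stride: each a[:, j] jumps a full row length per element
            return sum(a[:, j].sum() for j in range(a.shape[1]))

        for f in (row_major_sum, col_major_sum):
            t = time.perf_counter()
            s = f(a)
            print(f"{f.__name__}: {time.perf_counter() - t:.3f} s (sum={s:.1f})")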

  1. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump, or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified-piston-motion isothermal analysis, one with three adjustable inputs and one with four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  2. Finite population analysis of the effect of horizontal gene transfer on the origin of an universal and optimal genetic code

    NASA Astrophysics Data System (ADS)

    Aggarwal, Neha; Vishwa Bandhu, Ashutosh; Sengupta, Supratim

    2016-06-01

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA- and protein-based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences, each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code, eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold do we find that the ten-amino-acid code whose structure is most consistent with the standard genetic code (SGC) often gets fixed in the population with the highest probability. We examine how the threshold is determined by factors such as the population size, the length of the sequences, and the selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.

  3. Finite population analysis of the effect of horizontal gene transfer on the origin of an universal and optimal genetic code.

    PubMed

    Aggarwal, Neha; Bandhu, Ashutosh Vishwa; Sengupta, Supratim

    2016-01-01

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA- and protein-based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences, each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code, eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold do we find that the ten-amino-acid code whose structure is most consistent with the standard genetic code (SGC) often gets fixed in the population with the highest probability. We examine how the threshold is determined by factors such as the population size, the length of the sequences, and the selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC. PMID:27232957

  4. Improvement of BER performance in MIMO-CDMA systems by using initial-phase optimized gold codes

    NASA Astrophysics Data System (ADS)

    Develi, Ibrahim; Filiz, Meryem

    2013-01-01

    This paper describes a new approach to improve the bit error rate (BER) performance of a multiple-input multiple-output code-division multiple-access (MIMO-CDMA) system over quasi-static Rayleigh fading channels. The system considered employs robust space-time successive interference cancellation detectors and initial-phase optimized Gold codes for the improvement. The results clearly indicate that the use of initial-phase optimized Gold codes can significantly improve the BER performance of the system compared to the performance of a multiuser MIMO-CDMA system with conventional nonoptimized Gold codes. Furthermore, this performance improvement is achieved without any increase in system complexity.

  5. Optimizing the search for high-z GRBs:. the JANUS X-ray coded aperture telescope

    NASA Astrophysics Data System (ADS)

    Burrows, D. N.; Fox, D.; Palmer, D.; Romano, P.; Mangano, V.; La Parola, V.; Falcone, A. D.; Roming, P. W. A.

    We discuss the optimization of gamma-ray burst (GRB) detectors with a goal of maximizing the detected number of bright high-redshift GRBs, in the context of design studies conducted for the X-ray transient detector on the JANUS mission. We conclude that the optimal energy band for detection of high-z GRBs is below about 30 keV. We considered both lobster-eye and coded aperture designs operating in this energy band. Within the available mass and power constraints, we found that the coded aperture mask was preferred for the detection of high-z bursts with bright enough afterglows to probe galaxies in the era of the Cosmic Dawn. This initial conclusion was confirmed through detailed mission simulations, which found that the selected design (an X-ray Coded Aperture Telescope) would detect four times as many bright, high-z GRBs as the lobster-eye design we considered. The JANUS XCAT instrument will detect 48 GRBs with z > 5 and fluence S_x > 3 × 10^-7 erg cm^-2 in a two-year mission.

  6. The SWAN-SCALE code for the optimization of critical systems

    SciTech Connect

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-07-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.

  7. An optimal unequal error protection scheme with turbo product codes for wavelet compression of ultraspectral sounder data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.

    2006-08-01

    Most source coding techniques generate a bitstream in which different regions have unequal influence on data reconstruction. An uncorrected error in a more influential region can cause more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding with different code rates for different regions of the bitstream may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding. We use JPEG2000 for source coding and turbo product code (TPC) for channel coding as an example to demonstrate this technique with ultraspectral sounder data. Wavelet compression yields unequal significance in different wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rates for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors when compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code rate allocation for UEP needs to be determined only once and can be done offline.
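
    The offline allocation step can be sketched as a tiny search. A Python illustration (all failure probabilities, error weights, stream shares, and the rate budget are invented; a real system would measure these for its TPC options and wavelet levels):

        from itertools import product

        rates = (0.5, 0.66, 0.8)                 # available code rates
        resid_err = {0.5: 1e-6, 0.66: 1e-4, 0.8: 1e-2}  # decoding failure prob.
        weight = [100.0, 10.0, 3.0, 1.0]         # error impact, coarse -> fine
        share = [0.1, 0.2, 0.3, 0.4]             # bitstream fraction per level
        budget = 0.7                             # required average code rate

        best = None
        for assign in product(rates, repeat=4):  # one rate per wavelet level
            avg_rate = sum(s * r for s, r in zip(share, assign))
            if avg_rate < budget:
                continue                         # not enough payload throughput
            cost = sum(w * resid_err[r] for w, r in zip(weight, assign))
            if best is None or cost < best[0]:
                best = (cost, assign, avg_rate)

        print("expected error: %.2e  rates: %s  avg rate: %.3f" % best)

    As expected, the coarse (most influential) resolutions end up with the strongest protection and the fine ones with the weakest, mirroring the UEP-over-EEP result.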

  8. Optimizing performance of superscalar codes for a single Cray X1 MSP processor

    SciTech Connect

    Shan, Hongzhang; Strohmaier, Erich; Oliker, Leonid

    2004-06-08

    The growing gap between sustained and peak performance for full-scale complex scientific applications on conventional supercomputers is a major concern in high performance computing. The recently-released vector-based Cray X1 offers to bridge this gap for many demanding scientific applications. However, this unique architecture contains both data caches and multi-streaming processing units, and the optimal programming methodology is still under investigation. In this paper we investigate Cray X1 code optimization for a suite of computational kernels originally designed for superscalar processors. For our study, we select four applications from the SPLASH2 application suite (1-D FFT, Radix, Ocean, and N-body), two kernels from the NAS benchmark suite (3-D FFT and CG), and a matrix-matrix multiplication kernel. Results show that for many cases, the addition of vectorization compiler directives results in faster runtimes. However, to achieve a significant performance improvement via increased vector length, it is often necessary to restructure the program at the source level, sometimes leading to algorithmic-level transformations. Additionally, memory bank conflicts may result in substantial performance losses. These conflicts can often be exacerbated when optimizing code for increased vector lengths, and must be explicitly minimized. Finally, we investigate the relationship of the X1 data caches to overall performance.

  9. Operationally optimal vertex-based shape coding with arbitrary direction edge encoding structures

    NASA Astrophysics Data System (ADS)

    Lai, Zhongyuan; Zhu, Junhuan; Luo, Jiebo

    2014-07-01

    The intention of shape coding in the MPEG-4 is to improve the coding efficiency as well as to facilitate the object-oriented applications, such as shape-based object recognition and retrieval. These require both efficient shape compression and effective shape description. Although these two issues have been intensively investigated in data compression and pattern recognition fields separately, it remains an open problem when both objectives need to be considered together. To achieve high coding gain, the operational rate-distortion optimal framework can be applied, but the direction restriction of the traditional eight-direction edge encoding structure reduces its compression efficiency and description effectiveness. We present two arbitrary direction edge encoding structures to relax this direction restriction. They consist of a sector number, a short component, and a long component, which represent both the direction and the magnitude information of an encoding edge. Experiments on both shape coding and hand gesture recognition validate that our structures can reduce a large number of encoding vertices and save up to 48.9% bits. Besides, the object contours are effectively described and suitable for the object-oriented applications.

  10. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the cyphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.

  11. Performance and optimization of direct implicit time integration schemes for use in electrostatic particle simulation codes

    SciTech Connect

    Procassini, R.J.; Birdsall, C.K.; Morse, E.C.; Cohen, B.I.

    1988-01-01

    Implicit time integration schemes allow for the use of larger time steps than conventional explicit methods, thereby extending the applicability of kinetic particle simulation methods. This paper will describe a study of the performance and optimization of two such direct implicit schemes, which are used to follow the trajectories of charged particles in an electrostatic, particle-in-cell plasma simulation code. The direct implicit method that was used for this study is an alternative to the moment-equation implicit method. 10 refs., 7 figs., 4 tabs.

  12. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  13. An Integer-Coded Chaotic Particle Swarm Optimization for Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Yue, Chen; Yan-Duo, Zhang; Jing, Lu; Hui, Tian

    The Traveling Salesman Problem (TSP) is one of the NP-hard combinatorial optimization problems, which experience “combination explosion” when the problem goes beyond a certain size. The search for an effective solving method has therefore been a hot topic. The general mathematical model of the TSP is discussed, and its permutation-and-combination-based model is presented. Based on these, an Integer-coded Chaotic Particle Swarm Optimization (ICPSO) for solving the TSP is proposed, in which particles are encoded with integers, a chaotic sequence is used to guide the global search, and particles vary their positions via “flying”. With a typical 20-city TSP as an instance, a simulation experiment comparing ICPSO with GA is carried out. Experimental results demonstrate that ICPSO is simple but effective, and performs better than GA.
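
    A self-contained Python sketch in this spirit (a loose simplification: velocities are swap sequences toward personal and global bests, a logistic map supplies the chaotic sequence, and all constants are invented; the paper's exact update rule is not reproduced):

        import random

        random.seed(3)
        N = 12
        pts = [(random.random(), random.random()) for _ in range(N)]

        def tour_len(t):
            return sum(((pts[t[i]][0] - pts[t[i - 1]][0]) ** 2 +
                        (pts[t[i]][1] - pts[t[i - 1]][1]) ** 2) ** 0.5
                       for i in range(N))

        def swaps_toward(cur, target):
            # Swap sequence transforming cur into target (the "velocity").
            cur, ops = cur[:], []
            for i in range(N):
                if cur[i] != target[i]:
                    j = cur.index(target[i])
                    ops.append((i, j))
                    cur[i], cur[j] = cur[j], cur[i]
            return ops

        def apply_some(tour, ops, keep_prob, chaos):
            tour = tour[:]
            for (i, j) in ops:
                chaos = 4.0 * chaos * (1.0 - chaos)   # logistic map guides search
                if chaos < keep_prob:
                    tour[i], tour[j] = tour[j], tour[i]
            return tour, chaos

        swarm = [random.sample(range(N), N) for _ in range(20)]
        pbest = [t[:] for t in swarm]
        gbest = min(swarm, key=tour_len)
        chaos = 0.37
        for it in range(300):
            for k, t in enumerate(swarm):
                t, chaos = apply_some(t, swaps_toward(t, pbest[k]), 0.5, chaos)
                t, chaos = apply_some(t, swaps_toward(t, gbest), 0.5, chaos)
                swarm[k] = t
                if tour_len(t) < tour_len(pbest[k]):
                    pbest[k] = t[:]
            gbest = min(pbest + [gbest], key=tour_len)
        print("best tour length:", round(tour_len(gbest), 3))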

  14. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups could be obtained.

  15. Optimized conical shaped charge design using the SCAP (Shaped Charge Analysis Program) code

    SciTech Connect

    Vigil, M.G.

    1988-09-01

    The Shaped Charge Analysis Program (SCAP) is used to analytically model and optimize the design of Conical Shaped Charges (CSC). A variety of existing CSCs are initially modeled with the SCAP code and the predicted jet tip velocities, jet penetrations, and optimum standoffs are compared to previously published experimental results. The CSCs vary in size from 0.69 inch (1.75 cm) to 9.125 inch (23.18 cm) conical liner inside diameter. Two liner materials (copper and steel) and several explosives (Octol, Comp B, PBX-9501) are included in the CSCs modeled. The target material was mild steel. A parametric study was conducted using the SCAP code to obtain the optimum design for a 3.86 inch (9.8 cm) CSC. The variables optimized in this study included the CSC apex angle, conical liner thickness, explosive height, optimum standoff, tamper/confinement thickness, and explosive width. The non-dimensionalized jet penetration to diameter ratio versus the above parameters are graphically presented. 12 refs., 10 figs., 7 tabs.

  16. Optimization of wavefront-coded infinity-corrected microscope systems with extended depth of field

    PubMed Central

    Zhao, Tingyu; Mauger, Thomas; Li, Guoqiang

    2013-01-01

    The depth of field of an infinity-corrected microscope system is greatly extended by simply inserting a specially designed phase mask between the objective and the tube lens. In comparison with the method of modifying the structure of the objective, this is more cost-effective and provides improved flexibility for assembling the system. Instead of using an ideal optical system for simulation, which was the focus of previous research, a practical wavefront-coded infinity-corrected microscope system is designed in this paper by considering the various aberrations. Two new optimization methods, based on commercial optical design software, are proposed to design a wavefront-coded microscope using a non-symmetric phase mask and a symmetric phase mask, respectively. We use a polynomial phase mask and a rational phase mask as examples of the non-symmetric and symmetric phase masks, respectively. Simulation results show that both optimization methods work well for a 32 × infinity-corrected microscope system with 0.6 numerical aperture. The depth of field is extended to about 13 times that of the traditional system. PMID:24010008

  17. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.

    2015-12-01

    The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported in order to bring this package to the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing as much as possible the elapsed time of execution of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, sequential gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency results are shown.

  18. Motion estimation optimization tools for the emerging high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Masri, Wassim; Noaman, Bassam

    2014-02-01

    Recent developments in hardware and software have enabled a new generation of video quality. However, development in networking and digital communication is lagging behind. This prompted the establishment of the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard. A primary reason for developing the HEVC was to enable efficient processing and transmission of HD videos, which normally contain large smooth areas; the HEVC therefore utilizes larger encoding blocks than the previous standard to enable more effective encoding, while smaller blocks are still exploited to encode fast or complex areas of video more efficiently. Hence, the encoder implementation investigates all possible block sizes. This and many other features added to the new standard have led to a significant increase in the complexity of the encoding process. Furthermore, there is no automated process for deciding when large or small blocks should be exploited. To overcome this problem, this research proposes a set of optimization tools that reduce the encoding complexity while maintaining the same quality and compression rate. The method automates this decision through a set of hierarchical steps while still using the standard's refined coding tools.
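
The large-versus-small block decision the paper automates can be pictured as a recursive rate-distortion comparison between coding one large block and coding its four quadrants. The toy sketch below uses a made-up RD cost (mean-approximation error plus a fixed per-block rate) and is not the HEVC reference encoder or the paper's tools; it only shows why smooth areas end up in large blocks and complex areas in small ones.

```python
import numpy as np

def rd_cost(block, lmbda=500.0):
    """Toy RD proxy: distortion = SSE of approximating the block by its
    mean (a stand-in for real prediction), rate = fixed cost per block."""
    return float(np.sum((block - block.mean()) ** 2)) + lmbda

def best_partition(block, min_size=8):
    """Recursive quadtree decision mimicking HEVC's CU-size search:
    keep the large block or split into four, whichever costs less."""
    whole = rd_cost(block)
    n = block.shape[0]
    if n <= min_size:
        return whole, [n]
    h = n // 2
    quads = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    results = [best_partition(q, min_size) for q in quads]
    split = sum(c for c, _ in results)
    if whole <= split:
        return whole, [n]                       # smooth area: one large block
    return split, [s for _, sz in results for s in sz]

rng = np.random.default_rng(0)
smooth = np.full((64, 64), 128.0) + rng.normal(0, 1, (64, 64))
edge = np.where(np.add.outer(np.arange(64), np.arange(64)) < 64, 50.0, 200.0)
print("smooth 64x64  ->", best_partition(smooth)[1])
print("diagonal edge ->", sorted(best_partition(edge)[1], reverse=True)[:6], "...")
```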

  19. Optimal analysis of ultra broadband energy-time entanglement for high bit-rate dense wavelength division multiplexed quantum networks

    NASA Astrophysics Data System (ADS)

    Kaiser, F.; Aktas, D.; Fedrici, B.; Lunghi, T.; Labonté, L.; Tanzilli, S.

    2016-06-01

    We demonstrate an experimental method for measuring energy-time entanglement over almost 80 nm of spectral bandwidth in a single shot with a quantum bit error rate below 0.5%. Our scheme is extremely cost-effective and efficient in terms of resources, as it employs only one source of entangled photons and one fixed unbalanced interferometer per phase-coded analysis basis. We show that the maximum analysis spectral bandwidth is obtained when the analysis interferometers are properly unbalanced, a strategy which can be straightforwardly applied to most of today's experiments based on energy-time and time-bin entanglement. Our scheme therefore has great potential for boosting bit rates and reducing the resource overhead of future entanglement-based quantum key distribution systems.

  20. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    NASA Astrophysics Data System (ADS)

    Gather, Malte C.; Yun, Seok Hyun

    2014-12-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (−7 dB) and support strong optical amplification (gnet = 22 cm⁻¹, equivalent to 96 dB cm⁻¹). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  1. Detection optimization using linear systems analysis of a coded aperture laser sensor system

    SciTech Connect

    Gentry, S.M.

    1994-09-01

    Minimum detectable irradiance levels for a diffraction-grating-based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth's surface caused pseudo-imaging effects on the sensor's detector arrays that resulted in the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction grating technique, a classical Young's double-slit aperture technique was investigated as a possible optimized solution, but was not shown to produce a system with a better clutter-noise-limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double-slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While no concept was found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of the application of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for the analysis of a wide range of optoelectronic systems where the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.

  2. Optimization and implementation of the integer wavelet transform for image coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella

    2002-01-01

    This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The results obtained lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite-precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity. PMID:18244658

  3. Real Time Optimizing Code for Stabilization and Control of Plasma Reactors

    Energy Science and Technology Software Center (ESTSC)

    1995-09-25

    LOOP4 is a flexible real-time control code that acquires signals (input variables) from an array of sensors, computes therefrom the actual state of the reactor system, compares the actual state to the desired state (a goal), and commands changes to the reactor controls (output, or manipulated, variables) in order to minimize the difference between the actual state of the reactor and the desired state. The difference between actual and desired states is quantified in terms of a distance metric in the space defined by the sensor measurements. The desired state of the reactor is specified in terms of target values of sensor readings obtained previously, during process development and optimization, by a process engineer using conventional techniques.

  4. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and the magnetic field of the Earth's outer core are expected to have vast length scales. To resolve these flows, geodynamo simulations using the spherical harmonics transform (SHT) require high performance computing, and a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters on the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To optimize further, we investigate three different algorithms for the SHT using GPUs. The first is to precompute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU. In the third approach, we partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU; thereafter, the partitioned work is computed simultaneously within the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computation on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we will compare and contrast the different algorithms in the context of GPUs.

  5. Analytical computation of the derivative of PSF for the optimization of phase mask in wavefront coding system.

    PubMed

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-09-01

    A wavefront coding system can realize defocus invariance of the PSF/OTF with a phase mask inserted in the pupil plane. Ideally, the derivative of the PSF/OTF with respect to defocus error should be as close to zero as possible over the extended depth of field/focus. In this paper, we propose an analytical expression for computing the derivative of the PSF. With this expression, a derivative-of-PSF-based merit function can be used in the optimization of a wavefront coding system with any type of phase mask and aberrations. Computation of the derivative of the PSF using the proposed expression and using the FFT, respectively, are compared and discussed. We also demonstrate the optimization of a generic polynomial phase mask in a wavefront coding system as an example. PMID:27607710
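
The paper's closed-form derivative is not reproduced here, but the merit function it feeds can be prototyped with the FFT finite-difference baseline that the analytical expression is compared against. The sketch assumes a 1D pupil with a cubic phase term of hypothetical strength alpha and sums the derivative magnitude of the PSF over a target defocus range.

```python
import numpy as np

def psf(defocus, alpha, n=256):
    """FFT-based 1D PSF of a pupil with cubic mask alpha*u^3 plus defocus."""
    u = np.linspace(-1, 1, n)
    field = np.exp(2j * np.pi * (alpha * u**3 + defocus * u**2))
    p = np.abs(np.fft.fft(field)) ** 2
    return p / p.sum()

def dpsf_fd(defocus, alpha, h=1e-3):
    """Finite-difference derivative of the PSF with respect to defocus --
    the FFT baseline against which an analytical expression is checked."""
    return (psf(defocus + h, alpha) - psf(defocus - h, alpha)) / (2 * h)

def merit(alpha, defocus_grid=np.linspace(-1.0, 1.0, 11)):
    """Derivative-of-PSF merit: total |dPSF/d(defocus)| over the target
    depth of focus. Smaller means a more defocus-invariant system."""
    return sum(np.abs(dpsf_fd(d, alpha)).sum() for d in defocus_grid)

for a in (0.0, 2.0, 5.0, 10.0):
    print(f"cubic strength alpha={a:4.1f} -> merit {merit(a):8.2f}")
```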

  6. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction depend only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
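
Under the stated assumptions (exponential dose-thickness relation, per-direction independence), the optimization reduces to descending the weight gradient while staying on the dose constraint. The sketch below is a simplified one-direction, two-layer stand-in for DOPEX, with illustrative densities and attenuation coefficients rather than real shield data.

```python
import numpy as np

# Illustrative one-direction, two-layer problem (not the DOPEX input format):
rho = np.array([19.3, 0.95])   # assumed layer densities, g/cm^3
mu  = np.array([1.20, 0.25])   # assumed dose attenuation coefficients, 1/cm
c   = np.log(1.0e6 / 10.0)     # dose = D0*exp(-mu@t) <= limit  <=>  mu@t >= c

def project(t):
    """Return thicknesses to the feasible set: non-negative, and moved
    along mu back onto the dose constraint if the dose is too high."""
    t = np.maximum(t, 0.0)
    slack = c - mu @ t
    if slack > 0:
        t = np.maximum(t + mu * slack / (mu @ mu), 0.0)
    return t

def steepest_descent(t, step=0.01, iters=5000):
    """Projected steepest descent on shield weight rho @ t, a simplified
    stand-in for DOPEX's constrained descent step."""
    t = project(t)
    for _ in range(iters):
        t = project(t - step * rho)   # gradient of the weight is just rho
    return t

t = steepest_descent(np.array([5.0, 20.0]))
print("thicknesses (cm):", np.round(t, 2),
      " weight (g/cm^2):", round(float(rho @ t), 2))
```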

  7. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: Equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb

    SciTech Connect

    Piron, R.; Blenski, T.

    2011-02-15

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included.

  8. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb.

    PubMed

    Piron, R; Blenski, T

    2011-02-01

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included. PMID:21405914

  9. Symmetry-based coding method and synthesis topology optimization design of ultra-wideband polarization conversion metasurfaces

    NASA Astrophysics Data System (ADS)

    Sui, Sai; Ma, Hua; Wang, Jiafu; Feng, Mingde; Pang, Yongqiang; Xia, Song; Xu, Zhuo; Qu, Shaobo

    2016-07-01

    In this letter, we propose a synthesis topology optimization method for designing ultra-wideband polarization conversion metasurfaces for linearly polarized waves. The general design principle of polarization conversion metasurfaces is derived theoretically. Symmetry-based coding, with shorter coding length and better optimization efficiency, is then proposed. As an example, a topological metasurface with ultra-wideband polarization conversion is demonstrated. The results of both simulations and experiments show that the metasurface converts linearly polarized waves into cross-polarized waves over 8.0-30.0 GHz, demonstrating ultra-wideband polarization conversion and hence validating the synthesis design method. The proposed method combines the merits of topology optimization and symmetry-based coding, providing an efficient tool for the design of high-performance polarization conversion metasurfaces.

  10. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai

    2010-12-01

    We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. A semantic region-of-interest (ROI) is then extracted based on the saliency maps of the SVA. Both objective and subjective evaluations of the extracted ROIs indicate that the proposed SVA-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for compression efficiency. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining subjective image quality. Meanwhile, the image quality of the ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of imperceptible image quality degradation of the background.

  11. An Optimal Pull-Push Scheduling Algorithm Based on Network Coding for Mesh Peer-to-Peer Live Streaming

    NASA Astrophysics Data System (ADS)

    Cui, Laizhong; Jiang, Yong; Wu, Jianping; Xia, Shutao

    Most large-scale Peer-to-Peer (P2P) live streaming systems are constructed as a mesh structure, which provides robustness in the dynamic P2P environment. The pull scheduling algorithm widely used in this mesh structure degrades the performance of the entire system. Recently, network coding was introduced in mesh P2P streaming systems to improve performance, making a push strategy feasible. One of the best-known scheduling algorithms based on network coding is R2, which uses a random push strategy. Although R2 has achieved some success, push scheduling still lacks a theoretical model and optimal solution. In this paper, we propose a novel optimal pull-push scheduling algorithm based on network coding, which consists of two stages: an initial pull stage and a push stage. The main contributions of this paper are: 1) we put forward a theoretical analysis model that considers the scarcity and timeliness of segments; 2) we formulate the push scheduling problem as a global optimization problem and decompose it into local optimization problems on individual peers; 3) we introduce rules to transform the local optimization problem into a classical min-cost optimization problem in order to solve it; 4) we combine the pull strategy with the push strategy and systematically realize our scheduling algorithm. Simulation results demonstrate that the decode delay, decode ratio, and redundant fraction of a P2P streaming system using our algorithm are significantly improved, without losing throughput or increasing overhead.

  12. Neural network river forecasting through baseflow separation and binary-coded swarm optimization

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie

    2015-10-01

    The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are preferred ways to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit separately the baseflow and excess flow components as produced by a digital filter, and reconstruct the total flow by adding these two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANNs, are used to drastically reduce the large computational times required to perform the experiments. The results show no evidence that MMs outperform GMs for predicting the total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters maximizing overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
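
The abstract does not fix the digital filter, so the sketch below assumes a common choice, a one-parameter Lyne-Hollick-type recursive filter, and applies it to a synthetic hydrograph; the filter parameter alpha is the kind of quantity the binary-coded swarm search optimizes jointly with the model structure.

```python
import numpy as np

def separate_baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick type): split
    total streamflow q into quickflow (excess flow) and baseflow. The
    choice of this particular filter is an assumption for illustration."""
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # keep both signals physical
    return q - quick, quick

# Synthetic hydrograph: slow recession plus two storm pulses.
t = np.arange(200.0)
q = 1.0 + 5 * np.exp(-t / 80) \
    + 8 * np.exp(-0.5 * ((t - 40) / 4) ** 2) \
    + 6 * np.exp(-0.5 * ((t - 120) / 5) ** 2)
base, quick = separate_baseflow(q)
print(f"baseflow index (baseflow / total flow) = {base.sum() / q.sum():.2f}")
```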

  13. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    SciTech Connect

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
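
The shape of the DASSIM-RT planning objective can be conveyed at toy scale: a non-negative fluence map fit to a prescribed dose under a total-variation penalty. The sketch below uses a synthetic dose-influence matrix and plain projected gradient descent on a smoothed TV term; it is a stand-in for, not an implementation of, the TFOCS solver used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_beamlets = 60
A = rng.uniform(0, 1, (40, n_beamlets))              # synthetic dose-influence matrix
x_true = np.zeros(n_beamlets); x_true[20:40] = 1.0   # piecewise-constant fluence
d = A @ x_true                                       # "prescribed" dose

def tv_grad(x, eps=1e-6):
    """Gradient of the smoothed total variation sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    s = np.diff(x) / np.sqrt(np.diff(x) ** 2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def solve(beta=0.5, step=5e-4, iters=5000):
    """Projected gradient descent on ||Ax - d||^2 + beta*TV(x) with x >= 0,
    a toy stand-in for the first-order conic solver (TFOCS)."""
    x = np.zeros(n_beamlets)
    for _ in range(iters):
        x = np.maximum(x - step * (2 * A.T @ (A @ x - d) + beta * tv_grad(x)), 0.0)
    return x

x = solve()
print(f"relative dose error : {np.linalg.norm(A @ x - d) / np.linalg.norm(d):.3f}")
print(f"TV(x) vs TV(x_true) : {np.abs(np.diff(x)).sum():.2f} vs "
      f"{np.abs(np.diff(x_true)).sum():.2f}")
```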

  14. Code to Optimize Load Sharing of Split-Torque Transmissions Applied to the Comanche Helicopter

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Most helicopters now in service have a transmission with a planetary design. Studies have shown that some helicopters would be lighter and more reliable if they had a transmission with a split-torque design instead. However, a split-torque design has never been used by a U.S. helicopter manufacturer because there has been no proven method to ensure equal sharing of the load among the multiple load paths. The Sikorsky/Boeing team has chosen to use a split-torque transmission for the U.S. Army's Comanche helicopter, and Sikorsky Aircraft is designing and manufacturing the transmission. To help reduce the technical risk of fielding this helicopter, NASA and the Army have done the research jointly in cooperation with Sikorsky Aircraft. A theory was developed that equal load sharing could be achieved by proper configuration of the geartrain, and a computer code was completed in-house at the NASA Lewis Research Center to calculate this optimal configuration.

  15. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) for the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics, and the different geometry-based descriptions need accurate customization in all three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  16. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    PubMed

    Harper, Nicol S; Scott, Brian H; Semple, Malcolm N; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)-the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  17. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)–the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron’s maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  18. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  19. Experiences in the Performance Analysis and Optimization of a Deterministic Radiation Transport Code on the Cray SV1

    SciTech Connect

    Peter Cebull

    2004-05-01

    The Attila radiation transport code, which solves the Boltzmann neutron transport equation on three-dimensional unstructured tetrahedral meshes, was ported to a Cray SV1. Cray's performance analysis tools pointed to two subroutines that together accounted for 80%-90% of the total CPU time. Source code modifications were performed to enable vectorization of the most significant loops, to correct unfavorable strides through memory, and to replace a conjugate gradient solver subroutine with a call to the Cray Scientific Library. These optimizations resulted in a speedup of 7.79 for the INEEL's largest ATR model. Parallel scalability of the OpenMP version of the code is also discussed, and timing results are given for other non-vector platforms.

  20. Code-Switching and the Optimal Grammar of Bilingual Language Use

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.; Bolonyai, Agnes

    2011-01-01

    In this article, we provide a framework of bilingual grammar that offers a theoretical understanding of the socio-cognitive bases of code-switching in terms of five general principles that, individually or through interaction with each other, explain how and why specific instances of code-switching arise. We provide cross-linguistic empirical…

  1. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes.

    PubMed

    Khajeh, Masoud; Safigholi, Habib

    2016-03-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike the radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. Second optimization was done on the selection of the anode shape based on the Monte Carlo in water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes including cylindrical, spherical, and conical were considered. Moreover, by Computational Fluid Dynamic (CFD) code the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  2. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike the radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. Second optimization was done on the selection of the anode shape based on the Monte Carlo in water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes including cylindrical, spherical, and conical were considered. Moreover, by Computational Fluid Dynamic (CFD) code the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  3. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  4. GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders

    SciTech Connect

    Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed

    2013-06-01

    We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.
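
A stripped-down version of the physical model: linear symplectic one-turn matrices for transport plus a localized thin-lens beam-beam kick per turn. The sketch uses the round-Gaussian-beam limit of the kick rather than the full Bassetti-Erskine complex-error-function field, and all machine parameters are hypothetical.

```python
import numpy as np

# Hypothetical machine and beam parameters, for illustration only.
N_OPP, R_E, GAMMA, SIGMA = 1e9, 2.818e-15, 2000.0, 1e-4
XI = N_OPP * R_E / GAMMA                       # kick-strength prefactor

def one_turn(tune, beta=1.0):
    """Linear symplectic one-turn matrix for one transverse plane."""
    mu = 2 * np.pi * tune
    return np.array([[np.cos(mu), beta * np.sin(mu)],
                     [-np.sin(mu) / beta, np.cos(mu)]])

def beam_beam_kick(x, y):
    """Thin-lens kick from a round Gaussian opposing beam -- the round-beam
    limit; the actual code evaluates the full Bassetti-Erskine expression
    for elliptical beams."""
    r2 = x * x + y * y + 1e-30
    f = -2.0 * XI * (1.0 - np.exp(-r2 / (2.0 * SIGMA**2))) / r2
    return f * x, f * y

Mx, My = one_turn(0.31), one_turn(0.32)
px = np.array([1e-4, 0.0]); py = np.array([5e-5, 0.0])   # (x, x'), (y, y')
for _ in range(20000):                 # long-term turn-by-turn tracking
    px, py = Mx @ px, My @ py          # arc transport
    dxp, dyp = beam_beam_kick(px[0], py[0])
    px[1] += dxp; py[1] += dyp         # localized interaction-point kick
print(f"final |x| = {abs(px[0]):.2e} m, |y| = {abs(py[0]):.2e} m")
```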

  5. Optimization of WDM lightwave systems (BAC) design using error control coding

    NASA Astrophysics Data System (ADS)

    Mruthyunjaya, H. S.; Umesh, G.; Sathish Kumar, M.

    2007-04-01

    In a binary asymmetric channel (BAC) it may be necessary to correct only those errors which result from incorrect transmission of one of the two code elements. In optical fiber multichannel systems, optical amplifiers are critical components, and amplified spontaneous emission noise in the optical amplifiers is the major noise source. The properties of the erbium-doped fiber amplifier are nearly ideal for application in long-haul lightwave transmission. We investigate the performance of error-correcting codes in such systems in the presence of stimulated Raman scattering and amplified spontaneous emission noise with asymmetric channel statistics. The performance of some of the best-known concatenated coding schemes is reported.

  6. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters inherent in the MC simulation codes GATE, PHITS and FLUKA, previously examined for the uniform scanning proton beam, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameters depends on the proton irradiation technique. We

  7. [Non elective cesarean section: use of a color code to optimize management of obstetric emergencies].

    PubMed

    Rudigoz, René-Charles; Huissoud, Cyril; Delecour, Lisa; Thevenet, Simone; Dupont, Corinne

    2014-06-01

    The medical team of the Croix Rousse teaching hospital maternity unit has developed, over the last ten years, a set of procedures designed to respond to various emergency situations necessitating Caesarean section. Using the Lucas classification, we have defined as precisely as possible the degree of urgency of Caesarean sections. We have established specific protocols for the implementation of urgent and very urgent Caesarean sections and have chosen a simple means to convey the degree of urgency to all team members, namely a color code system (red, orange and green). We have set time goals from decision to delivery: 15 minutes for the red code and 30 minutes for the orange code. The results seem very positive: the frequency of urgent and very urgent Caesareans has fallen over time, from 6.1% to 1.6% in 2013. The average time from decision to delivery is 11 minutes for code-red Caesareans and 21 minutes for code-orange Caesareans. These time goals are now achieved in 95% of cases. Organizational and anesthetic difficulties are the main causes of delays. The indications for red and orange code Caesareans are appropriate more than two times out of three. Perinatal outcomes are generally favorable, code-red Caesareans being life-saving in 15% of cases. No increase in maternal complications has been observed. In sum, each obstetric department should have its own protocols for handling urgent and very urgent Caesarean sections. Continuous monitoring of their implementation, relevance and results should be conducted. Management of extreme urgency must be integrated into the management of patients with identified risks (scarred uterus and twin pregnancies, for example), and also in structures without medical facilities (birthing centers). Obstetric teams must keep in mind that implementation of these protocols in no way dispenses with close monitoring of labour. PMID:26983190

  8. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images by designing an efficient transform that reduces the redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
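
The lifting structure being modified is compact enough to state in code. Below is a plain 5/3-style integer lifting step with periodic extension; in the paper's scheme the predict stage would be replaced by disparity compensation plus luminance correction between the two views. Reconstruction is exact whatever predictor is substituted, because each lifting step is individually invertible.

```python
import numpy as np

def lifting_forward(x):
    """One 5/3-style integer lifting level with periodic extension:
    split into even/odd samples, predict the odds from even neighbours,
    then update the evens with the prediction residuals."""
    even, odd = x[::2].astype(int), x[1::2].astype(int)
    detail = odd - (even + np.roll(even, -1)) // 2        # predict step
    approx = even + (detail + np.roll(detail, 1)) // 4    # update step
    return approx, detail

def lifting_inverse(approx, detail):
    """Exact inverse: each lifting step is undone in reverse order, so
    any predictor (including a disparity-compensated one) reconstructs."""
    even = approx - (detail + np.roll(detail, 1)) // 4
    odd = detail + (even + np.roll(even, -1)) // 2
    x = np.empty(even.size * 2, dtype=int)
    x[::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 11, 9, 8, 8, 13, 15])
a, d = lifting_forward(x)
assert np.array_equal(lifting_inverse(a, d), x)   # perfect reconstruction
print("approx:", a, " detail:", d)
```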

  9. DENSE MEDIUM CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell; Chris J. Barbee; Peter J. Bethell; Chris J. Wood

    2005-06-30

    Dense medium cyclones (DMCs) are known to be efficient, high-tonnage devices suitable for upgrading particles in the 50 to 0.5 mm size range. This versatile separator, which uses centrifugal forces to enhance the separation of fine particles that cannot be upgraded in static dense medium separators, can be found in most modern coal plants and in a variety of mineral plants treating iron ore, dolomite, diamonds, potash and lead-zinc ores. Due to the high tonnage, a small increase in DMC efficiency can have a large impact on plant profitability. Unfortunately, the knowledge base required to properly design and operate DMCs has been seriously eroded during the past several decades. In an attempt to correct this problem, a set of engineering tools have been developed to allow producers to improve the efficiency of their DMC circuits. These tools include (1) low-cost density tracers that can be used by plant operators to rapidly assess DMC performance, (2) mathematical process models that can be used to predict the influence of changes in operating and design variables on DMC performance, and (3) an expert advisor system that provides plant operators with a user-friendly interface for evaluating, optimizing and trouble-shooting DMC circuits. The field data required to develop these tools was collected by conducting detailed sampling and evaluation programs at several industrial plant sites. These data were used to demonstrate the technical, economic and environmental benefits that can be realized through the application of these engineering tools.

  10. A study of the optimization method used in the NAVY/NASA gas turbine engine computer code

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Pines, S.

    1977-01-01

    Sources of numerical noise affecting the convergence properties of the Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.

  11. Program user's manual for optimizing the design of a liquid or gaseous propellant rocket engine with the automated combustor design code AUTOCOM

    NASA Technical Reports Server (NTRS)

    Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.

    1973-01-01

    This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.

  12. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
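
Specialized to GF(2) and distance d = 4 (the SECDED case), the filter operation has a concrete form: a candidate column must be eliminated if it is zero, equals an already-chosen column, or equals the XOR of two chosen columns, since any of those makes some three columns linearly dependent. The greedy low-weight selection in this sketch is one plausible heuristic, not necessarily the patent's.

```python
from itertools import combinations

def filter_candidates(candidates, chosen):
    """Filter step specialized to GF(2) and distance d = 4 (SECDED): drop
    any vector equal to zero, to a chosen column, or to the XOR of two
    chosen columns -- each would make three columns linearly dependent."""
    banned = {0} | set(chosen) | {a ^ b for a, b in combinations(chosen, 2)}
    return [v for v in candidates if v not in banned]

def build_check_columns(r, n):
    """Iteratively filter the candidate set and pick columns until the
    r-row check matrix has n columns (or the candidates run out)."""
    candidates, chosen = list(range(1, 2 ** r)), []
    while len(chosen) < n:
        candidates = filter_candidates(candidates, chosen)
        if not candidates:
            raise ValueError("independence requirement cannot be met")
        # One possible heuristic: lowest Hamming weight first, which tends
        # to reduce the XOR logic needed to implement the matrix.
        chosen.append(min(candidates, key=lambda v: (bin(v).count("1"), v)))
    return chosen

cols = build_check_columns(r=6, n=10)   # e.g. 6 check rows, 10 columns
print([format(c, "06b") for c in cols])
```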

  13. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  14. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  15. Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Askri, Boubaker

    2015-10-01

    Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low energy bremsstrahlung photons with beryllium material. A benchmark test showed that a good agreement was achieved when comparing the emitted neutron flux spectra predicted by Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two stage Monte Carlo simulation. In the first stage, the distributions of the seven phase space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10¹⁰ neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10⁹ neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.

  16. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
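
At toy scale the ILP can be replaced by exhaustive search over the same feasible set, which makes the structure of the problem visible: pick one MCS per SVC layer so that the total airtime fits the frame while the number of layers decodable across all users is maximized. All numbers below are hypothetical.

```python
from itertools import product

# Hypothetical inputs: three SVC layers multicast to seven users.
MCS_RATE = {0: 1.0, 1: 2.0, 2: 4.0}        # Mb/s for each MCS index
USER_BEST_MCS = [2, 2, 1, 1, 1, 0, 0]      # highest MCS each user can decode
LAYER_BITS = [1.0, 1.5, 2.0]               # Mb per SVC layer, base layer first
FRAME_TIME = 2.2                           # airtime budget (s)

def utility(assign):
    """Total layers delivered: a user decodes layers in order until one is
    sent with an MCS above its channel quality (SVC layers are nested)."""
    total = 0
    for best in USER_BEST_MCS:
        for mcs in assign:
            if mcs > best:
                break
            total += 1
    return total

best = None
for assign in product(MCS_RATE, repeat=len(LAYER_BITS)):   # exhaustive search
    airtime = sum(bits / MCS_RATE[m] for bits, m in zip(LAYER_BITS, assign))
    if airtime <= FRAME_TIME:
        cand = (utility(assign), assign, airtime)
        best = max(best, cand) if best else cand

print("best MCS per layer:", best[1], "| layers delivered:", best[0],
      "| airtime used:", round(best[2], 2), "s")
```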

  17. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  18. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  19. Performance of an Optimized Eta Model Code on the Cray T3E and a Network of PCs

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Rancic, Miodrag; Geiger, Jim

    2000-01-01

    In the year 2001, NASA will launch the satellite TRIANA that will be the first Earth observing mission to provide a continuous, full disk view of the sunlit Earth. As a part of the HPCC Program at NASA GSFC, we have started a project whose objectives are to develop and implement a 3D cloud data assimilation system, by combining TRIANA measurements with model simulation, and to produce accurate statistics of global cloud coverage as an important element of the Earth's climate. For simulation of the atmosphere within this project we are using the NCEP/NOAA operational Eta model. In order to compare TRIANA and the Eta model data on approximately the same grid without significant downscaling, the Eta model will be integrated at a resolution of about 15 km. The integration domain (from -70 to +70 deg in latitude and 150 deg in longitude) will cover most of the sunlit Earth disc and will continuously rotate around the globe following TRIANA. The cloud data assimilation is supposed to run and produce 3D clouds on a near real-time basis. Such a numerical setup and integration design is very ambitious and computationally demanding. Thus, though the Eta model code has been very carefully developed and its computational efficiency has been systematically polished during the years of operational implementation at NCEP, the current MPI version may still have problems with memory and efficiency for the TRIANA simulations. Within this work, we optimize a parallel version of the Eta model code on a Cray T3E and a network of PCs (theHIVE) in order to improve its overall efficiency. Our optimization procedure consists of introducing dynamically allocated arrays to reduce the size of static memory, and optimizing on a single processor by splitting loops to limit the number of streams. All the presented results are derived using an integration domain centered at the equator, with a size of 60 x 60 deg, and with horizontal resolutions of 1/2 and 1/3 deg, respectively. In accompanying
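
    The loop-splitting optimization mentioned above can be illustrated with a toy example (a conceptual Python sketch; the Eta model itself is Fortran, where loop fission limits the number of concurrent memory streams per vector loop on machines such as the SX series):

        import numpy as np

        n = 10**6
        a, b, c, d, e, f = (np.random.rand(n) for _ in range(6))

        # fused form: a single pass that streams through six arrays at once
        out_fused = a * b + c * d + e * f

        # split form: two passes, each touching at most four arrays,
        # mirroring how loop fission caps the stream count per loop
        tmp = a * b + c * d
        out_split = tmp + e * f

        assert np.allclose(out_fused, out_split)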

  20. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used to implement digital pulse compression (DPC) and achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, applied either to a single-stage mismatched filter or to a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGAs; this often becomes a design challenge for system-on-chip (SoC) requirements. The number of multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using cluster centroids as tap weights greatly reduces the logic used in the FPGA for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between iterations and between runs, producing different clusterings of the weights; sometimes a smaller number of multipliers and a shorter filter may even provide a better PSR.
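
    A minimal sketch of the tap-clustering idea (hand-rolled k-means on random stand-in weights; the actual LP-designed radar filter is not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)
        taps = rng.normal(size=64)        # stand-in for LP-designed filter weights
        k = 8

        centroids = np.sort(rng.choice(taps, k, replace=False))
        for _ in range(50):               # Lloyd iterations
            labels = np.argmin(np.abs(taps[:, None] - centroids[None, :]), axis=1)
            centroids = np.array([taps[labels == j].mean() if np.any(labels == j)
                                  else centroids[j] for j in range(k)])

        quantized = centroids[labels]     # every tap replaced by its cluster centroid
        print("distinct multipliers:", np.unique(quantized).size)
        print("max tap error:", np.abs(taps - quantized).max())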

  1. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    SciTech Connect

    Yeaw, C.T.

    1995-12-31

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.

  2. Combined optimal quantization and lossless coding of digital holograms of three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2006-10-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated due to the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realized, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantization. We adapt the optimal signal quantization technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry. We complete the compression procedure by applying lossless techniques to the quantized holographic pixels.
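
    A minimal sketch of histogram-driven quantization of complex-valued holographic pixels (quantile-based levels as a stand-in for the paper's optimal quantizer; the hologram here is random data):

        import numpy as np

        rng = np.random.default_rng(1)
        holo = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
        levels = 8                                    # 3 bits per component

        def quantize(x, levels):
            # decision levels at equal-probability quantiles of the histogram
            edges = np.quantile(x, np.linspace(0, 1, levels + 1))
            idx = np.clip(np.digitize(x, edges[1:-1]), 0, levels - 1)
            recon = np.array([x[idx == j].mean() for j in range(levels)])
            return recon[idx]

        rec = quantize(holo.real.ravel(), levels) + 1j * quantize(holo.imag.ravel(), levels)
        print("MSE:", np.mean(np.abs(holo.ravel() - rec) ** 2))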

  3. Method for dense packing discovery

    NASA Astrophysics Data System (ADS)

    Kallus, Yoav; Elser, Veit; Gravel, Simon

    2010-11-01

    The problem of packing a system of particles as densely as possible is foundational in the field of discrete geometry and is a powerful model in the material and biological sciences. As packing problems retreat from the reach of solution by analytic constructions, the importance of an efficient numerical method for conducting de novo (from-scratch) searches for dense packings becomes crucial. In this paper, we use the divide and concur framework to develop a general search method for the solution of periodic constraint problems, and we apply it to the discovery of dense periodic packings. An important feature of the method is the integration of the unit-cell parameters with the other packing variables in the definition of the configuration space. The method we present led to previously reported improvements in the densest-known tetrahedron packing. Here, we use the method to reproduce the densest-known lattice sphere packings and the best-known lattice kissing arrangements in up to 14 and 11 dimensions, respectively, providing numerical evidence for their optimality. For nonspherical particles, we report a dense packing of regular four-dimensional simplices with density ϕ=128/219≈0.5845 and with a similar structure to the densest-known tetrahedron packing.

  4. ROCOPT: A user friendly interactive code to optimize rocket structural components

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1989-01-01

    ROCOPT is a user-friendly, graphically-interfaced, microcomputer-based computer program (IBM compatible) that optimizes rocket components by minimizing the structural weight. The rocket components considered are ring stiffened truncated cones and cylinders. The applied loading is static, and can consist of any combination of internal or external pressure, axial force, bending moment, and torque. Stress margins are calculated by means of simple closed form strength of material type equations. Stability margins are determined by approximate, orthotropic-shell, closed-form equations. A modified form of Powell's method, in conjunction with a modified form of the external penalty method, is used to determine the minimum weight of the structure subject to stress and stability margin constraints, as well as user input constraints on the structural dimensions. The graphical interface guides the user through the required data prompts, explains program options and graphically displays results for easy interpretation.
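
    The optimization strategy described, Powell's method combined with an external penalty, can be sketched on a made-up two-variable weight-minimization problem (the weight and stress-margin functions below are stand-ins, not ROCOPT's closed-form equations):

        import numpy as np
        from scipy.optimize import minimize

        def weight(x):                    # objective: structural weight (stand-in)
            t, h = x
            return 100.0 * t + 20.0 * h

        def stress_margin(x):             # must stay >= 0 (stand-in constraint)
            t, h = x
            return t * h - 0.5

        def penalized(x, r=1e4):          # external quadratic penalty
            t, h = x
            p = min(stress_margin(x), 0.0) ** 2
            p += min(t - 0.05, 0.0) ** 2 + min(h - 0.05, 0.0) ** 2  # keep dimensions physical
            return weight(x) + r * p

        res = minimize(penalized, x0=[1.0, 1.0], method="Powell")
        print(res.x, weight(res.x), stress_margin(res.x))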

  5. A comprehensive method for preliminary design optimization of axial gas turbine stages. II - Code verification

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1983-01-01

    The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.

  6. Optimized quadtree for Karhunen-Loeve transform in multispectral image coding.

    PubMed

    Lee, J

    1999-01-01

    A new multispectral image compression technique based on the Karhunen-Loeve transform (KLT) and the discrete cosine transform (DCT) is proposed. The quadtree for determining the transform block size and the quantizer for encoding the transform coefficients are jointly optimized in a rate-distortion sense. The problem is solved by a Lagrange multiplier approach. After a quadtree is determined by this approach, a one-dimensional (1-D) KLT is applied to the spectral axis for each block before the DCT is applied on the spatial domain. The eigenvectors of the autocovariance matrix, the quantization scale, and the quantized transform coefficients for each block are the output of the encoder. The overhead information required in this scheme is the bits for the quadtree, KLT, and quantizer representation. PMID:18262890
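
    A minimal sketch of the Lagrange-multiplier split decision that drives such a quadtree (block variance as a crude distortion proxy and a fixed rate per block; the paper's KLT/DCT coder is not reproduced): a block is subdivided only when the summed child cost D + lambda*R beats the parent's.

        import numpy as np

        LAMBDA = 0.1

        def cost(block):
            dist = block.var() * block.size   # distortion proxy if coded as one block
            rate = 16.0                       # header + coefficient bits (hypothetical)
            return dist + LAMBDA * rate

        def best_quadtree(block, min_size=4):
            whole = cost(block)
            n = block.shape[0]
            if n <= min_size:
                return whole, "leaf"
            h = n // 2
            split_cost = sum(best_quadtree(block[i:i + h, j:j + h], min_size)[0]
                             for i in (0, h) for j in (0, h))
            return (whole, "leaf") if whole <= split_cost else (split_cost, "split")

        img = np.random.rand(32, 32)
        img[:16, :16] = 0.5                   # flat quadrant: cheap to code as a whole
        print(best_quadtree(img))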

  7. Optimal coding-decoding for systems controlled via a communication channel

    NASA Astrophysics Data System (ADS)

    Yi-wei, Feng; Guo, Ge

    2013-12-01

    In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Unlike previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for different network configurations. The results are useful in determining the minimum channel capacity needed to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.

  8. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

    Babcock and Wilcox Technical Services Group (B&W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum-99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had been 'experimentally demonstrated to be among the safest of all various types of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code Fluidity; the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B&W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V&V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B&W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  9. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. We therefore present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses of compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.

  10. Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications

    NASA Astrophysics Data System (ADS)

    Roser, Miguel; Villegas, Paulo

    1994-05-01

    In this paper we present work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, 'Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s', widely known as MPEG-1. The main intention in the definition of the MPEG-1 standard was to provide a large degree of flexibility for use in many different applications. The interest of this paper is to adapt the MPEG-1 scheme for low-bit-rate operation and optimize it for special situations, for example a talking head with low movement, which is a usual situation in videotelephony applications. An adapted and compatible MPEG-1 scheme, previously developed and able to operate at p x 8 kbit/s, is used in this work. Looking for a low-complexity scheme, and taking into account that the most expensive step in terms of computer time is the motion estimation (ME) process (almost 80% of the total computer time is spent on the ME), we present an improvement of the motion estimation module based on the use of a new search pattern.
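
    A minimal sketch of pattern-based block matching (a generic diamond pattern is used here, since the paper's new search pattern is not reproduced; the frames, block position, and motion are synthetic assumptions): only offsets on the pattern are evaluated, re-centred on the current best match.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sad(ref, cur, bx, by, dx, dy, B=16):
            r = ref[by + dy:by + dy + B, bx + dx:bx + dx + B]
            c = cur[by:by + B, bx:bx + B]
            return np.abs(r - c).sum()

        def pattern_search(ref, cur, bx, by, B=16, max_step=8):
            dx = dy = 0
            pattern = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # small diamond
            step = max_step
            while step >= 1:
                cands = [(dx + step * px, dy + step * py) for px, py in pattern]
                dx, dy = min(cands, key=lambda v: sad(ref, cur, bx, by, v[0], v[1], B))
                step //= 2                     # re-centre and shrink the pattern
            return dx, dy

        rng = np.random.default_rng(2)
        ref = gaussian_filter(rng.random((64, 64)), 3)   # smooth synthetic frame
        cur = np.roll(ref, (2, 3), axis=(0, 1))          # frame shifted by (2, 3)
        print(pattern_search(ref, cur, 24, 24))          # often recovers (dx, dy) near (-3, -2)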

  11. [A comparison of the knockout efficiencies of two codon-optimized Cas9 coding sequences in zebrafish embryos].

    PubMed

    Fenghua, Zhang; Houpeng, Wang; Siyu, Huang; Feng, Xiong; Zuoyan, Zhu; Yonghua, Sun

    2016-02-01

    Recent years have witnessed the rapid development of the clustered regularly interspaced short palindromic repeats/CRISPR-associated protein (CRISPR/Cas9) system. In order to realize gene knockout with high efficiency and specificity in zebrafish, several labs have synthesized distinct Cas9 cDNA sequences, which were cloned into different vectors. In this study, we chose two commonly used zebrafish-codon-optimized Cas9 coding sequences (zCas9_bz, zCas9_wc) from two different labs and utilized them to knock out seven genes in zebrafish embryos, including the exogenous egfp and six endogenous genes (chd, hbegfa, th, eef1a1b, tyr and tcf7l1a). We compared the knockout efficiencies resulting from the two zCas9 coding sequences by direct sequencing of PCR products, colony sequencing and phenotypic analysis. The results showed that the knockout efficiency of zCas9_wc was higher than that of zCas9_bz in all conditions. PMID:26907778

  12. Using Microsoft Excel as a pre-processor for CODE V optimization of air spaces when building camera lenses

    NASA Astrophysics Data System (ADS)

    Stephenson, Dave

    2013-09-01

    When building high-performance camera lenses, it is often preferable to tailor element-to-element air spaces instead of tightening the fabrication tolerances sufficiently that random assembly is possible. A tailored air space solution is usually unique to each serial-numbered camera lens and results in nearly nominal performance. When these air spaces are computed from measured radii, thicknesses, and refractive indices, dealing with all the data in a timely fashion can strain the design engineering department. Excel may be used by the assembly technician as a pre-processor tool to facilitate data entry and organization, and to perform the optimization using CODE V (or equivalent) without any training or experience in lens design software. This makes it unnecessary to involve design engineering for each lens serial number, which can otherwise mean waiting in their work queue. In addition, Excel can be programmed to run CODE V in such a way that discrete shim thicknesses result. This makes it possible for each tailored air space solution to be achieved using a finite number of shims that differ in thickness by a reasonable amount. It is generally not necessary to tailor the air spaces in each lens to the micron level to achieve nearly nominal performance.

  13. Laser-induced fusion in ultra-dense deuterium D(-1): Optimizing MeV particle emission by carrier material selection

    NASA Astrophysics Data System (ADS)

    Holmlid, Leif

    2013-02-01

    Power generation by laser-induced nuclear fusion in ultra-dense deuterium D(-1) requires that the carrier material interacts correctly with D(-1) prior to the laser pulse and also during the laser pulse. In previous studies, the interaction between the superfluid D(-1) layer and various carrier materials prior to the laser pulse has been investigated. It was shown that organic polymer materials do not give a condensed D(-1) layer. Metal surfaces carry thicker D(-1) layers useful for fusion. Here, the interaction between the carrier and the nuclear fusion process is investigated by observing the MeV particle emission (e.g. 14 MeV protons) using twelve different carrier materials and two different methods of detection. Several factors have been analyzed for the performance of the carrier materials: the hardness and the melting point of the material, and the chemical properties of the surface layer. The best performance is found for the high-melting metals Ti and Ta, but also Cu performs well as carrier despite its low melting point. The unexpectedly meager performance of Ni and Ir may be due to their catalytic activity towards hydrogen which may give atomic association to deuterium molecules at the low D2 pressure used.

  14. A four-column theory for the origin of the genetic code: tracing the evolutionary pathways that gave rise to an optimized code

    PubMed Central

    Higgs, Paul G

    2009-01-01

    Background: The arrangement of the amino acids in the genetic code is such that neighbouring codons are assigned to amino acids with similar physical properties. Hence, the effects of translational error are minimized with respect to randomly reshuffled codes. Further inspection reveals that it is amino acids in the same column of the code (i.e. same second base) that are similar, whereas those in the same row show no particular similarity. We propose a 'four-column' theory for the origin of the code that explains how the action of selection during the build-up of the code leads to a final code that has the observed properties. Results: The theory makes the following propositions. (i) The earliest amino acids in the code were those that are easiest to synthesize non-biologically, namely Gly, Ala, Asp, Glu and Val. (ii) These amino acids are assigned to codons with G at first position. Therefore the first code may have used only these codons. (iii) The code rapidly developed into a four-column code where all codons in the same column coded for the same amino acid: NUN = Val, NCN = Ala, NAN = Asp and/or Glu, and NGN = Gly. (iv) Later amino acids were added sequentially to the code by a process of subdivision of codon blocks in which a subset of the codons assigned to an early amino acid were reassigned to a later amino acid. (v) Later amino acids were added into positions formerly occupied by amino acids with similar properties because this can occur with minimal disruption to the proteins already encoded by the earlier code. As a result, the properties of the amino acids in the final code retain a four-column pattern that is a relic of the earliest stages of code evolution. Conclusion: The driving force during this process is not the minimization of translational error, but positive selection for the increased diversity and functionality of the proteins that can be made with a larger amino acid alphabet. Nevertheless, the code that results is one in which translational

  15. Atoms in dense plasmas

    SciTech Connect

    More, R.M.

    1986-01-01

    Recent experiments with high-power pulsed lasers have strongly encouraged the development of improved theoretical understanding of highly charged ions in a dense plasma environment. This work examines the theory of dense plasmas with emphasis on general rules which govern matter at extreme high temperature and density. 106 refs., 23 figs.

  16. Kinetic Simulations of Dense Plasma Focus Breakdown

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Higginson, D. P.; Jiang, S.; Link, A.; Povilus, A.; Sears, J.; Bennett, N.; Rose, D. V.; Welch, D. R.

    2015-11-01

    A dense plasma focus (DPF) device is a type of plasma gun that drives current through a set of coaxial electrodes to assemble gas inside the device and then implode that gas on axis to form a Z-pinch. This implosion drives hydrodynamic and kinetic instabilities that generate strong electric fields, which produces a short intense pulse of x-rays, high-energy (>100 keV) electrons and ions, and (in deuterium gas) neutrons. A strong factor in pinch performance is the initial breakdown and ionization of the gas along the insulator surface separating the two electrodes. The smoothness and isotropy of this ionized sheath are imprinted on the current sheath that travels along the electrodes, thus making it an important portion of the DPF to both understand and optimize. Here we use kinetic simulations in the Particle-in-cell code LSP to model the breakdown. Simulations are initiated with neutral gas and the breakdown modeled self-consistently as driven by a charged capacitor system. We also investigate novel geometries for the insulator and electrodes to attempt to control the electric field profile. The initial ionization fraction of gas is explored computationally to gauge possible advantages of pre-ionization which could be created experimentally via lasers or a glow-discharge. Prepared by LLNL under Contract DE-AC52-07NA27344.

  17. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
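
    A minimal sketch of the delay-and-sum back-projection step being accelerated (a NumPy CPU stand-in for the GPU kernel; the array geometry, sampling rate, and RF data are toy assumptions): each pixel accumulates the transducer samples at its acoustic delay.

        import numpy as np

        c, fs = 1500.0, 40e6                     # sound speed (m/s), sampling rate (Hz)
        sensors = np.stack([np.linspace(-0.01, 0.01, 64), np.zeros(64)], axis=1)
        signals = np.random.rand(64, 2048)       # stand-in RF data: (sensor, sample)

        xs = np.linspace(-0.01, 0.01, 100)
        zs = np.linspace(0.001, 0.021, 100)
        image = np.zeros((zs.size, xs.size))

        for i, (sx, sz) in enumerate(sensors):
            # distance from every pixel to this sensor -> delayed sample index
            delay = np.sqrt((xs[None, :] - sx) ** 2 + (zs[:, None] - sz) ** 2) / c
            idx = np.clip((delay * fs).astype(int), 0, signals.shape[1] - 1)
            image += signals[i, idx]             # accumulate the delayed samples

        print(image.shape)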

  18. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  19. User's guide for the BNW-III optimization code for modular dry/wet-cooled power plants

    SciTech Connect

    Braun, D.J.; Faletti, D.W.

    1984-09-01

    This user's guide describes BNW-III, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of the Dry Cooling Enhancement Program sponsored by the US Department of Energy (DOE). The BNW-III code models a modular dry/wet cooling system for a nuclear or fossil fuel power plant. The purpose of this guide is to give the code user a brief description of what the BNW-III code is and how to use it. It describes the cooling system being modeled and the various models used. A detailed description of code input and code output is also included. The BNW-III code was developed to analyze a specific cooling system layout. However, there is a large degree of freedom in the type of cooling modules that can be selected and in the performance of those modules. The costs of the modules are input to the code, giving the user a great deal of flexibility.

  20. Optimization of Grit-Blasting Process Parameters for Production of Dense Coatings on Open Pores Metallic Foam Substrates Using Statistical Methods

    NASA Astrophysics Data System (ADS)

    Salavati, S.; Coyle, T. W.; Mostaghimi, J.

    2015-10-01

    Open-pore metallic foam core sandwich panels prepared by thermal spraying of a coating onto the foam structures can be used as high-efficiency heat transfer devices due to their high surface-area-to-volume ratio. The structural, mechanical, and physical properties of the thermally sprayed skins play a significant role in the performance of the related devices. These properties are mainly controlled by the porosity content, oxide content, adhesion strength, and stiffness of the deposited coating. In this study, the effects of grit-blasting process parameters on the characteristics of the temporary surface created on the metallic foam substrate, and on the twin-wire arc-sprayed alloy 625 coating subsequently deposited on the foam, were investigated through response surface methodology. Characterization of the prepared surface and sprayed coating was conducted by scanning electron microscopy, roughness measurements, and adhesion testing. Using statistical design of experiments (the response surface method), a model was developed to predict the effect of grit-blasting parameters on the surface roughness of the prepared foam and on the porosity content of the sprayed coating. The coating porosity and adhesion strength were found to be determined by the substrate surface roughness, which could be controlled by the grit-blasting parameters. Optimization of the grit-blasting parameters was conducted using the fitted model to minimize the porosity content of the coating while maintaining high adhesion strength.
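
    A minimal sketch of fitting a second-order response-surface model by ordinary least squares (the factor names and data below are hypothetical, not the paper's measurements):

        import numpy as np

        rng = np.random.default_rng(5)
        pressure = rng.uniform(2, 6, 20)           # blast pressure, bar (hypothetical factor)
        grit = rng.uniform(20, 60, 20)             # grit size, mesh (hypothetical factor)
        roughness = (3 + 0.8 * pressure - 0.02 * grit
                     + 0.1 * pressure**2 + rng.normal(0, 0.2, 20))

        # design matrix for y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
        X = np.column_stack([np.ones_like(pressure), pressure, grit,
                             pressure**2, grit**2, pressure * grit])
        coef, *_ = np.linalg.lstsq(X, roughness, rcond=None)
        print(np.round(coef, 3))                   # fitted response-surface coefficients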

  1. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, R.L.

    1993-10-12

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  2. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, Richard L.

    1993-01-01

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  3. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    NASA Astrophysics Data System (ADS)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the processing involved. Recovered QR codes can be successfully scanned thanks to their noise tolerance. Finally, scanning the recovered QR codes in the appropriate sequence yields a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need for a sequence to retrieve the outcome.
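
    The final encryption step described is simple to sketch digitally (the multiplexed pack here is stand-in data): multiplying by a random-phase diffuser and then by its complex conjugate recovers the pack exactly, with no added noise.

        import numpy as np

        rng = np.random.default_rng(3)
        pack = rng.random((256, 256)) * np.exp(1j * 2 * np.pi * rng.random((256, 256)))

        diffuser = np.exp(1j * 2 * np.pi * rng.random(pack.shape))  # unit-modulus phase mask
        encrypted = pack * diffuser                                  # digital encryption
        decrypted = encrypted * np.conj(diffuser)                    # exact inverse

        print(np.allclose(decrypted, pack))    # True: the operation is lossless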

  4. Earthquake source inversion with dense networks

    NASA Astrophysics Data System (ADS)

    Somala, S.; Ampuero, J. P.; Lapusta, N.

    2012-12-01

    Inversions of earthquake source slip from recorded ground motions typically impose a number of restrictions on the source parameterization, which are needed to stabilize the inverse problem with sparse data. Such restrictions may include smoothing, causality considerations, predetermined shapes of the local source-time function, and constant rupture speed. The goal of our work is to understand whether the inversion results could be substantially improved by the availability of much denser sensor networks than currently available. The best regional networks have sensor spacings in the tens-of-kilometers range, much larger than the wavelengths relevant to key aspects of earthquake physics. Novel approaches to providing orders-of-magnitude denser sensing include low-cost sensors (Community Seismic Network) and space-based optical imaging (Geostationary Optical Seismometer). However, in both cases, the density of sensors comes at the expense of accuracy. Inversions that involve large numbers of sensors are intractable with current source inversion codes. Hence we are developing a new approach that can handle thousands of sensors. It employs iterative conjugate gradient optimization based on an adjoint method and involves iterative time-reversed 3D wave propagation simulations using the spectral element method (SPECFEM3D). To test the developed method, and to investigate the effect of sensor density and quality on the inversion results, we have been considering kinematic and dynamic synthetic sources of several types: one or more Haskell pulses with various widths and spacings; scenarios with local rupture propagation in the opposite direction (as observed during the 2010 El Mayor-Cucapah earthquake); dynamic crack-like rupture, both subshear and supershear; and rupture that mimics supershear propagation by jumping along the fault. In each case, we produce the data by a forward SPECFEM3D calculation, choose the desired density of stations, filter the data to 1 Hz
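
    A minimal sketch of adjoint-based conjugate-gradient inversion on a toy linear problem d = G m (in the real method, applying G and its transpose corresponds to forward and time-reversed SPECFEM3D simulations; the operator and data here are random stand-ins):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(4)
        n_slip, n_rec = 50, 200
        G = rng.normal(size=(n_rec, n_slip))          # stand-in wave-propagation operator
        m_true = np.zeros(n_slip)
        m_true[20:30] = 1.0                           # synthetic slip patch
        d = G @ m_true                                # synthetic recorded data

        # normal equations G^T G m = G^T d, applied matrix-free via forward + adjoint
        A = LinearOperator((n_slip, n_slip), matvec=lambda m: G.T @ (G @ m))
        m_est, info = cg(A, G.T @ d)
        print(info, np.linalg.norm(m_est - m_true))   # info == 0 on convergence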

  5. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    SciTech Connect

    Dellin, T.A.; Fish, M.J.; Yang, C.L.

    1981-08-01

    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.

  6. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.

    1995-06-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  7. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in heavy table memory access, which leads to high table power consumption. To address the heavy table memory access of current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index-search technology is introduced to reduce the memory access of table look-up and thus its power consumption. Specifically, our scheme uses index search to reduce memory access by cutting the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed index-search-based table look-up algorithm can reduce memory access by about 60% compared with a sequential-search table look-up scheme, saving substantial power for CAVLD in H.264/AVC.
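
    A minimal sketch of the index-search idea (toy prefix/suffix codes, not the actual H.264 CAVLC tables): the zero-run length of the prefix and the suffix value index the table entry directly, avoiding a sequential scan of every code word.

        # table[(prefix_zeros, suffix_value)] -> decoded symbol (toy assignments)
        TABLE = {(0, 0): "A", (1, 0): "B", (1, 1): "C", (2, 0): "D"}

        def decode(bits):
            zeros, i = 0, 0
            while bits[i] == "0":        # count leading zeros: the code prefix
                zeros += 1
                i += 1
            i += 1                       # skip the terminating '1'
            suffix = int(bits[i:i + zeros], 2) if zeros else 0
            return TABLE[(zeros, suffix)], i + zeros   # direct index, no scanning

        print(decode("1"))        # ('A', 1)
        print(decode("010"))      # ('B', 3)
        print(decode("011"))      # ('C', 3)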

  8. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time, and frequency resources of an underground tunnel are open, we propose to build wireless sensor nodes on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, we also propose to utilize cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm. PMID:26343660
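
    A minimal sketch of generic particle swarm optimization on a stand-in objective (the paper's D-PSO searches candidate symbol vectors against a multi-sensor detection metric instead; the swarm parameters below are common textbook choices):

        import numpy as np

        rng = np.random.default_rng(6)

        def objective(x):                      # stand-in for the detection cost
            return np.sum(x**2, axis=1)

        n_particles, dim, iters = 30, 5, 100
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), objective(pos)
        gbest = pbest[np.argmin(pbest_val)]

        w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration constants
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = objective(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]

        print(gbest, objective(gbest[None, :]))  # should approach the zero vector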

  9. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time, and frequency resources of an underground tunnel are open, we propose to build wireless sensor nodes on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, we also propose to utilize cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm. PMID:26343660

  10. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  11. Dense suspension splash

    NASA Astrophysics Data System (ADS)

    Dodge, Kevin M.; Peters, Ivo R.; Ellowitz, Jake; Schaarsberg, Martin H. Klein; Jaeger, Heinrich M.; Zhang, Wendy W.

    2014-11-01

    Impact of a dense suspension drop onto a solid surface at speeds of several meters-per-second splashes by ejecting individual liquid-coated particles. Suppression or reduction of this splash is important for thermal spray coating and additive manufacturing. Accomplishing this aim requires distinguishing whether the splash is generated by individual scattering events or by collective motion reminiscent of liquid flow. Since particle inertia dominates over surface tension and viscous drag in a strong splash, we model suspension splash using a discrete-particle simulation in which the densely packed macroscopic particles experience inelastic collisions but zero friction or cohesion. Numerical results based on this highly simplified model are qualitatively consistent with observations. They also show that approximately 70% of the splash is generated by collective motion. Here an initially downward-moving particle is ejected into the splash because it experiences a succession of low-momentum-change collisions whose effects do not cancel but instead accumulate. The remainder of the splash is generated by scattering events in which a small number of high-momentum-change collisions cause a particle to be ejected upwards.

  12. Warm dense crystallography

    NASA Astrophysics Data System (ADS)

    Valenza, Ryan A.; Seidler, Gerald T.

    2016-03-01

    The intense femtosecond-scale pulses from x-ray free electron lasers (XFELs) are able to create and interrogate interesting states of matter characterized by long-lived nonequilibrium semicore or core electron occupancies or by the heating of dense phases via the relaxation cascade initiated by the photoelectric effect. We address here the latter case of "warm dense matter" (WDM) and investigate the observable consequences of x-ray heating of the electronic degrees of freedom in crystalline systems. We report temperature-dependent density functional theory calculations for the x-ray diffraction from crystalline LiF, graphite, diamond, and Be. We find testable, strong signatures of condensed-phase effects that emphasize the importance of wide-angle scattering to study nonequilibrium states. These results also suggest that the reorganization of the valence electron density at eV-scale temperatures presents a confounding factor to achieving atomic resolution in macromolecular serial femtosecond crystallography (SFX) studies at XFELs, as performed under the "diffract before destroy" paradigm.

  13. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039

  14. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  15. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  16. Colon specific CODES based Piroxicam tablet for colon targeting: statistical optimization, in vivo roentgenography and stability assessment.

    PubMed

    Singh, Amit Kumar; Pathak, Kamla

    2015-03-01

    This study aimed to statistically optimize a CODES™-based Piroxicam (PXM) tablet for colon targeting. A 3^2 full factorial design was used for preparation of the core tablet, which was subsequently coated to obtain the CODES™-based tablet. The experimental design of the core tablets comprised two independent variables, the amounts of lactulose and PEG 6000, each at three levels; the dependent variable was %CDR at 12 h. The core tablets were evaluated by pharmacopoeial and non-pharmacopoeial tests and coated with optimized levels of Eudragit E100, followed by HPMC K15, and finally Eudragit S100. The in vitro drug release study of F1-F9 was carried out by a change-over media method (0.1 N HCl buffer, pH 1.2; phosphate buffer, pH 7.4; and phosphate buffer, pH 6.8, with the enzyme β-galactosidase, 120 IU) to select the optimized formulation F9, which was subjected to in vivo roentgenography. The roentgenography study corroborated the in vitro performance, thus providing proof of concept. The experimental design was validated by an extra check-point formulation, and diffuse reflectance spectroscopy revealed the absence of any interaction between the drug and the formulation excipients. The shelf life of F9 was deduced to be 12 months. Conclusively, colon-targeted CODES™ technology-based PXM tablets were successfully optimized, and their colon-targeting potential was validated by roentgenography. PMID:24266719

  17. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core, sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations, including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices, is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed-memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input compared to the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
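
    A minimal sketch of the one-assembly-per-rank decomposition using mpi4py (a Python stand-in, not CTF's Fortran; the array sizes and interface coupling are toy assumptions): each rank advances its own assembly state and exchanges boundary data with a neighbour through MPI.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        local = np.full(10, float(rank))       # this rank's assembly state
        right, left = (rank + 1) % size, (rank - 1) % size

        # exchange one boundary value with the neighbouring assembly
        recv = np.empty(1)
        comm.Sendrecv(sendbuf=local[-1:], dest=right, recvbuf=recv, source=left)
        local[0] = 0.5 * (local[0] + recv[0])  # toy coupling at the interface

        print(f"rank {rank}: boundary value {local[0]:.2f}")
        # run with, e.g.: mpirun -np 4 python decomp_sketch.py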

  18. Dense Hypervelocity Plasma Jets

    NASA Astrophysics Data System (ADS)

    Witherspoon, F. Douglas; Case, Andrew; Phillips, Michael W.

    2006-10-01

    High velocity dense plasma jets are under continued experimental development for a variety of fusion applications including refueling, disruption mitigation, rotation drive, and magnetized target fusion. The technical goal is to accelerate plasma slugs of density >10^17 cm^-3 and total mass >100 micrograms to velocities >200 km/s. The approach utilizes symmetrical injection of very high density plasma into a coaxial EM accelerator having a tailored cross-section geometry to prevent formation of the blow-by instability. Injected plasma is generated by electrothermal capillary discharges using either cylindrical capillaries or a newer toroidal spark gap arrangement that has worked at pressures as low as 3.5x10^-6 Torr in bench tests. Experimental plasma data will be presented for a complete 32 injector accelerator system recently built for driving rotation in the Maryland MCX experiment, which utilizes the cylindrical capillaries, and also for a 50 spark gap test unit currently under construction.

  19. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code

    NASA Astrophysics Data System (ADS)

    Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.; Ippolito, N.

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around a single aperture, including part of the source and part of the acceleration region (up to the middle of the extraction grid (EG)), has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results have shown that the dimensions of the flat and chamfered parts, and the slope of the latter in front of the source region, maximize the product of the production rate and the extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the yz plane has been reported.

  20. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code.

    PubMed

    Taccogna, F; Minelli, P; Cavenago, M; Veltri, P; Ippolito, N

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around a single aperture, including part of the source and part of the acceleration region (up to the middle of the extraction grid (EG)), has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results have shown that the dimensions of the flat and chamfered parts, and the slope of the latter in front of the source region, maximize the product of the production rate and the extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the yz plane has been reported. PMID:26932027

  1. Geometrical Optics of Dense Aerosols

    SciTech Connect

    Hay, Michael J.; Valeo, Ernest J.; Fisch, Nathaniel J.

    2013-04-24

    Assembling a free-standing, sharp-edged slab of homogeneous material that is much denser than gas, but much more rarefied than a solid, is an outstanding technological challenge. The solution may lie in focusing a dense aerosol to assume this geometry. However, whereas the geometrical optics of dilute aerosols is a well-developed field, the dense aerosol limit is mostly unexplored. Yet controlling the geometrical optics of dense aerosols is necessary in preparing such a material slab. Focusing dense aerosols is shown here to be possible, but the finite particle density reduces the effective Stokes number of the flow, a critical result for controlled focusing.

  2. BUMPERII - DESIGN ANALYSIS CODE FOR OPTIMIZING SPACECRAFT SHIELDING AND WALL CONFIGURATION FOR ORBITAL DEBRIS AND METEOROID IMPACTS

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1994-01-01

    BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element of each threat. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability
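    The Poisson model referenced above implies a simple closed form: if N is the expected number of penetrating impacts accumulated over all surface elements, the probability of no penetration is exp(-N). A minimal sketch, with invented element areas and fluxes (BUMPERII's actual geometry and response databases are far more detailed):

```python
import math

# (exposed area [m^2], penetrating-impact flux [1/m^2/yr]) per element;
# values are illustrative only.
elements = [
    (12.0, 1.0e-6),
    (30.0, 2.5e-6),
]
exposure_years = 10.0

# Poisson model: P(no penetration) = exp(-expected penetrations).
expected = sum(area * flux * exposure_years for area, flux in elements)
pnp = math.exp(-expected)
print(f"PNP over {exposure_years} yr: {pnp:.6f}")
```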

  3. Ariel's Densely Pitted Surface

    NASA Technical Reports Server (NTRS)

    1986-01-01

    This mosaic of the four highest-resolution images of Ariel represents the most detailed Voyager 2 picture of this satellite of Uranus. The images were taken through the clear filter of Voyager's narrow-angle camera on Jan. 24, 1986, at a distance of about 130,000 kilometers (80,000 miles). Ariel is about 1,200 km (750 mi) in diameter; the resolution here is 2.4 km (1.5 mi). Much of Ariel's surface is densely pitted with craters 5 to 10 km (3 to 6 mi) across. These craters are close to the threshold of detection in this picture. Numerous valleys and fault scarps crisscross the highly pitted terrain. Voyager scientists believe the valleys have formed over down-dropped fault blocks (graben); apparently, extensive faulting has occurred as a result of expansion and stretching of Ariel's crust. The largest fault valleys, near the terminator at right, as well as a smooth region near the center of this image, have been partly filled with deposits that are younger and less heavily cratered than the pitted terrain. Narrow, somewhat sinuous scarps and valleys have been formed, in turn, in these young deposits. It is not yet clear whether these sinuous features have been formed by faulting or by the flow of fluids.

    JPL manages the Voyager project for NASA's Office of Space Science.

  4. Dense Hypervelocity Plasma Jets

    NASA Astrophysics Data System (ADS)

    Case, Andrew; Witherspoon, F. Douglas; Messer, Sarah; Bomgardner, Richard; Phillips, Michael; van Doren, David; Elton, Raymond; Uzun-Kaymak, Ilker

    2007-11-01

    We are developing high velocity dense plasma jets for fusion and HEDP applications. Traditional coaxial plasma accelerators suffer from the blow-by instability, which limits the mass accelerated to high velocity. In the current design, blow-by is delayed by a combination of electrode shaping and use of a tailored plasma armature created by injection of a high density plasma at a few eV generated by arrays of capillary discharges or spark gaps. Experimental data will be presented for a complete 32 injector gun system built for driving rotation in the Maryland MCX experiment, including data on penetration of the plasma jet through a magnetic field. We present spectroscopic measurements of plasma velocity, temperature, and density, as well as total momentum measured using a ballistic pendulum. Measurements are in agreement with each other and with time-of-flight data from photodiodes and a multichannel PMT. Plasma density is above 10^15 cm^-3, and velocities range up to about 100 km/s. Preliminary results from a quadrature heterodyne HeNe interferometer are consistent with these results.

  5. Multi-scaling of the dense plasma focus

    NASA Astrophysics Data System (ADS)

    Saw, S. H.; Lee, S.

    2015-03-01

    The dense plasma focus is a copious source of multi-radiations with many potential new applications of special interest such as in advanced SXR lithography, materials synthesizing and testing, medical isotopes and imaging. This paper reviews the series of numerical experiments conducted using the Lee model code to obtain the scaling laws of the multi-radiations.

  6. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.
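    To make the encoding concrete, here is a minimal sketch of scoring a binary input mask with a tiny ELM and an AIC-style criterion; the data are synthetic, and a real (M)BFIPS run would evolve a swarm of such particles rather than sample one at random.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # 8 candidate inputs
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)

def elm_sse(X_sel, y, hidden=20):
    W = rng.normal(size=(X_sel.shape[1], hidden))    # random input weights
    H = np.tanh(X_sel @ W)                           # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # analytic output solve
    return float(np.sum((H @ beta - y) ** 2))

def aic_fitness(mask):
    k = int(mask.sum())
    if k == 0:
        return np.inf
    n = len(y)
    sse = elm_sse(X[:, mask.astype(bool)], y)
    return n * np.log(sse / n) + 2 * k               # AIC-style penalty

particle = rng.integers(0, 2, size=8)                # one binary particle
print(particle, aic_fitness(particle))
```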

  7. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties, a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK), or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
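    A back-of-the-envelope check of the bandwidth claim: since TCM absorbs its redundancy in the signal constellation, the only rate loss comes from the outer RS code. RS(255,223) is used below purely as a familiar example, not necessarily the report's code choice.

```python
# Bandwidth expansion of a concatenated TCM + RS scheme is set by the
# outer RS code rate alone.
n, k = 255, 223
print(f"RS({n},{k}) bandwidth expansion: {n / k - 1.0:.1%}")  # about 14%
```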

  8. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2012-07-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].
    New version program summary
    Program title: HFFER II
    Catalogue identifier: AECC_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 55 130
    No. of bytes in distributed program, including test data, etc.: 293 700
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: Cluster of 1-13 HP Compaq dc5750
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
    RAM: 1 GByte per node
    Classification: 2.1
    External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
    Catalogue identifier of previous version: AECC_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
    Does the new version supersede the previous version?: Yes
    Nature of problem: Quantitative modellings of features observed in the X-ray spectra of isolated magnetic neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases.
    Solution method: The

  9. Population kinetics in dense plasmas

    SciTech Connect

    Schlanges, M.; Bornath, T.; Prenzel, R.; Kremp, D.

    1996-07-01

    Starting from quantum kinetic equations, rate equations for the number densities of the different atomic states and equations for the energy density are derived which are valid for dense nonideal plasmas. Statistical expressions are presented for the rate coefficients taking into account many-body effects as dynamical screening, lowering of the ionization energy and Pauli-blocking. Based on these generalized expressions, the coefficients of impact ionization, three-body recombination, excitation and deexcitation are calculated for nonideal hydrogen and carbon plasmas. As a result, higher ionization and recombination rates are obtained in the dense plasma region. The influence of the many-body effects on the population kinetics, including density and temperature relaxation, is shown then for a dense hydrogen plasma. © 1996 American Institute of Physics.

  10. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
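    For readers unfamiliar with the distribution being modified, here is a minimal sketch of two-dimensional block-cyclic ownership, plus a simplified striding-flavored variant; the strided mapping below is an illustrative stand-in, not the paper's actual function.

```python
# Matrix block (i, j) on a P x Q process grid.
def block_cyclic_owner(i, j, P, Q):
    return (i % P, j % Q)

def strided_owner(i, j, P, Q, stride):
    # Rotate the row assignment every `stride` block-columns to blend
    # row-major-like and column-major-like layouts (simplified).
    return ((i + j // stride) % P, j % Q)

# Example: owners of the first 4x4 blocks on a 2x2 grid.
for i in range(4):
    print([block_cyclic_owner(i, j, 2, 2) for j in range(4)])
```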

  11. On the Grammar of Code-Switching.

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.

    1996-01-01

    Explores an Optimality-Theoretic approach to account for observed cross-linguistic patterns of code switching that assumes that code switching strives for well-formedness. Optimization of well-formedness in code switching is shown to follow from (violable) ranked constraints. An argument is advanced that code-switching patterns emerge from…

  12. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J.; van de Geijn, R.; Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.

  13. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England.
    --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification.
    --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  14. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. parallelizing tools and compiler evaluation; 2. code cleanup and serial optimization using automated scripts; 3. development of a code generator for performance prediction; 4. automated partitioning; 5. automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  15. Validation of spatiotemporally dense springtime land surface phenology with intensive and upscale in situ

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Land surface phenology (LSP), developed using temporally and spatially optimized remote sensing data, is particularly promising for use in detailed ecosystem monitoring and modeling efforts. Validating spatiotemporally dense LSP using compatible (intensively collected) in situ phenological data is t...

  16. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    SciTech Connect

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-06-15

    Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility, consisting of the beam tube, two quadrupole magnets, and a beam monitor system, was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and in a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA, and the transport of 80 MeV/u C12 ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analysis of the fluence pattern along the beam axis reproduced the characteristic focusing and de-focusing effects of the quadrupole magnets. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
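    A minimal sketch of the transfer-matrix step described under Methods, in Python rather than Matlab: drifts and thick quadrupoles act on the transverse phase-space vector (x, x') by matrix multiplication, and an optimizer would tune the strengths k1, k2 to shrink the spot at the iso-center. All numbers are illustrative, not the facility's parameters.

```python
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def quad_focusing(k, L):          # k [1/m^2]: normalized gradient
    phi = np.sqrt(k) * L
    return np.array([[np.cos(phi), np.sin(phi) / np.sqrt(k)],
                     [-np.sqrt(k) * np.sin(phi), np.cos(phi)]])

def quad_defocusing(k, L):
    phi = np.sqrt(k) * L
    return np.array([[np.cosh(phi), np.sinh(phi) / np.sqrt(k)],
                     [np.sqrt(k) * np.sinh(phi), np.cosh(phi)]])

# One transverse plane of a doublet, then a drift to the iso-center.
k1, k2 = 4.0, 3.0                                   # hypothetical strengths
M = drift(2.0) @ quad_defocusing(k2, 0.3) @ drift(0.2) @ quad_focusing(k1, 0.3)
x0 = np.array([1.0e-3, 0.5e-3])                     # initial (x [m], x' [rad])
print(M @ x0)                                       # ray at the iso-center
```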

  17. Warm Dense Matter: An Overview

    SciTech Connect

    Kalantar, D H; Lee, R W; Molitoris, J D

    2004-04-21

    This document provides a summary of the ''LLNL Workshop on Extreme States of Materials: Warm Dense Matter to NIF'', which was held on 20, 21, and 22 February 2002 at the Wente Conference Center in Livermore, CA. The warm dense matter regime, the transitional phase space region between cold material and hot plasma, is presently poorly understood. The drive to understand the nature of matter in this regime is sparking scientific activity worldwide. In addition to pure scientific interest, finite temperature dense matter occurs in the regimes of interest to the SSMP (Stockpile Stewardship Materials Program), so obtaining a better understanding of WDM is important to performing effective experiments at, e.g., NIF, a primary mission of LLNL. At this workshop we examined current experimental and theoretical work performed at, and in conjunction with, LLNL to focus future activities and define our role in this rapidly emerging research area. On the experimental front, LLNL plays a leading role in three of the five relevant areas and has the opportunity to become a major player in the other two. Discussion at the workshop indicated that the path forward for the experimental efforts at LLNL was twofold: first, we are doing reasonable baseline work at SPLs, HE, and high energy lasers, with more effort encouraged; second, we need to plan effectively for the next evolution in large scale facilities, both laser (NIF) and light/beam sources (LCLS/TESLA and GSI). Theoretically, LLNL has major research advantages in areas as diverse as the thermochemical approach to warm dense matter equations of state to first principles molecular dynamics simulations. However, it was clear that there is much work to be done theoretically to understand warm dense matter. Further, there is a need for close collaboration between the generation of verifiable experimental data that can provide benchmarks of both the experimental techniques and the theoretical capabilities. The conclusion of this

  18. Transonic aerodynamics of dense gases. M.S. Thesis - Virginia Polytechnic Inst. and State Univ., Apr. 1990

    NASA Technical Reports Server (NTRS)

    Morren, Sybil Huang

    1991-01-01

    Transonic flow of dense gases for two-dimensional, steady-state flow over a NACA 0012 airfoil was predicted analytically. The computer code used to model the dense gas behavior was a modified version of Jameson's FLO52 airfoil code. The modifications to the code enabled modeling the dense gas behavior near the saturated vapor curve and critical pressure region, where the fundamental derivative, Gamma, is negative. This negative-Gamma region is of interest because nonclassical gas behavior, such as the formation and propagation of expansion shocks and the disintegration of inadmissible compression shocks, may exist there. The results indicated that dense gases with undisturbed thermodynamic states in the negative-Gamma region show a significant reduction in the extent of the transonic regime as compared to that predicted by perfect gas theory. The results support existing theories and predictions of nonclassical, dense gas behavior from previous investigations.
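    For reference, the fundamental derivative invoked above has the standard gas-dynamics definition (conventional notation, not reproduced from the thesis):

```latex
\Gamma \equiv \frac{v^{3}}{2 a^{2}}
  \left( \frac{\partial^{2} p}{\partial v^{2}} \right)_{\!s}
  = 1 + \frac{\rho}{a} \left( \frac{\partial a}{\partial \rho} \right)_{\!s}
```

    where v is the specific volume, a the sound speed, and s the entropy. For a perfect gas Γ = (γ+1)/2 > 0; states with Γ < 0 invert the usual nonlinear steepening, which is what admits expansion shocks and forbids the corresponding compression shocks.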

  19. Boundary Preserving Dense Local Regions.

    PubMed

    Kim, Jaechul; Grauman, Kristen

    2015-05-01

    We propose a dense local region detector to extract features suitable for image matching and object recognition tasks. Whereas traditional local interest operators rely on repeatable structures that often cross object boundaries (e.g., corners, scale-space blobs), our sampling strategy is driven by segmentation, and thus preserves object boundaries and shape. At the same time, whereas existing region-based representations are sensitive to segmentation parameters and object deformations, our novel approach to robustly sample dense sites and determine their connectivity offers better repeatability. In extensive experiments, we find that the proposed region detector provides significantly better repeatability and localization accuracy for object matching compared to an array of existing feature detectors. In addition, we show our regions lead to excellent results on two benchmark tasks that require good feature matching: weakly supervised foreground discovery and nearest neighbor-based object recognition. PMID:26353319

  20. An efficient fully atomistic potential model for dense fluid methane

    NASA Astrophysics Data System (ADS)

    Jiang, Chuntao; Ouyang, Jie; Zhuang, Xin; Wang, Lihua; Li, Wuming

    2016-08-01

    A fully atomistic model intended as a general-purpose model for dense fluid methane is presented. The new optimized potential for liquid simulation (OPLS) model is a rigid five-site model which consists of five fixed point charges and five Lennard-Jones centers. The parameters in the potential model are determined by a fit to the experimental data for dense fluid methane using molecular dynamics simulation. The radial distribution function and the diffusion coefficient are successfully calculated for dense fluid methane at various state points. The simulated results are in good agreement with the available experimental data reported in the literature. Moreover, the distribution of the mean number of hydrogen bonds and the distribution of pair energy are analyzed, as obtained from the new model and five reference potential models. Furthermore, the space-time correlation functions for dense fluid methane are also discussed. All the numerical results demonstrate that the new OPLS model is well suited to investigating dense fluid methane.
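    A minimal sketch of the five-site pair energy implied above: every pair of sites on two rigid molecules contributes a Lennard-Jones term plus a point-charge Coulomb term. The unit constant, combining rules, and all parameters are placeholders (GROMACS-style units), not the paper's fitted values.

```python
import numpy as np

COULOMB_K = 138.935458            # kJ mol^-1 nm e^-2

def site_pair_energy(r, eps, sigma, qi, qj):
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return lj + COULOMB_K * qi * qj / r

def molecule_pair_energy(sites_a, sites_b, eps, sigma, q):
    # sites_*: (5, 3) coordinates of two rigid molecules [nm];
    # eps, sigma, q: per-site LJ parameters and charges (length 5).
    e = 0.0
    for i in range(5):
        for j in range(5):
            r = float(np.linalg.norm(sites_a[i] - sites_b[j]))
            eij = np.sqrt(eps[i] * eps[j])           # geometric mean
            sij = 0.5 * (sigma[i] + sigma[j])        # arithmetic mean
            e += site_pair_energy(r, eij, sij, q[i], q[j])
    return e
```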

  1. Dense, finely grained composite materials

    DOEpatents

    Dunmead, Stephen D.; Holt, Joseph B.; Kingman, Donald D.; Munir, Zuhair A.

    1990-01-01

    Dense, finely grained composite materials comprising one or more ceramic phases and one or more metallic and/or intermetallic phases are produced by combustion synthesis. Spherical ceramic grains are homogeneously dispersed within the matrix. Methods are provided, which include the step of applying mechanical pressure during or immediately after ignition, by which the microstructures in the resulting composites can be controllably selected.

  2. Dense periodic packings of tori

    NASA Astrophysics Data System (ADS)

    Gabbrielli, Ruggero; Jiao, Yang; Torquato, Salvatore

    2014-02-01

    Dense packings of nonoverlapping bodies in three-dimensional Euclidean space R3 are useful models of the structure of a variety of many-particle systems that arise in the physical and biological sciences. Here we investigate the packing behavior of congruent ring tori in R3, which are multiply connected nonconvex bodies of genus 1, as well as horn and spindle tori. Specifically, we analytically construct a family of dense periodic packings of unlinked tori guided by the organizing principles originally devised for simply connected solid bodies [Torquato and Jiao, Phys. Rev. E 86, 011102 (2012)]. We find that the horn tori as well as certain spindle and ring tori can achieve a packing density not only higher than that of spheres (i.e., π/√18 = 0.7404...) but also higher than the densest known ellipsoid packings (i.e., 0.7707...). In addition, we study dense packings of clusters of pair-linked ring tori (i.e., Hopf links), which can possess much higher densities than corresponding packings consisting of unlinked tori.

  3. Dense, Viscous Brine Behavior in Heterogeneous Porous Medium Systems

    PubMed Central

    Wright, D. Johnson; Pedit, J.A.; Gasda, S.E.; Farthing, M.W.; Murphy, L.L.; Knight, S.R.; Brubaker, G.R.

    2010-01-01

    The behavior of dense, viscous calcium bromide brine solutions used to remediate systems contaminated with dense nonaqueous phase liquids (DNAPLs) is considered in laboratory and field porous medium systems. The density and viscosity of brine solutions are experimentally investigated and functional forms fit over a wide range of mass fractions. A density of 1.7 times, and a corresponding viscosity of 6.3 times, that of water is obtained at a calcium bromide mass fraction of 0.53. A three-dimensional laboratory cell is used to investigate the establishment, persistence, and rate of removal of a stratified dense brine layer in a controlled system. Results from a field-scale experiment performed at the Dover National Test Site are used to investigate the ability to establish and maintain a dense brine layer as a component of a DNAPL recovery strategy, and to recover the brine at sufficiently high mass fractions to support the economical reuse of the brine. The results of both laboratory and field experiments show that a dense brine layer can be established, maintained, and recovered to a significant extent. Regions of unstable density profiles are shown to develop and persist in the field-scale experiment, which we attribute to regions of low hydraulic conductivity. The saturated-unsaturated, variable-density ground-water flow simulation code SUTRA is modified to describe the system of interest, and used to compare simulations to experimental observations and to investigate certain unobserved aspects of these complex systems. The model results show that the standard model formulation is not appropriate for capturing the behavior of sharp density gradients observed during the dense brine experiments. PMID:20444520

  4. Constructing Dense Graphs with Unique Hamiltonian Cycles

    ERIC Educational Resources Information Center

    Lynch, Mark A. M.

    2012-01-01

    It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…

  5. Optimization of geometry, material and economic parameters of a two-zone subcritical reactor for transmutation of nuclear waste with SERPENT Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Gulik, Volodymyr; Tkaczyk, Alan Henry

    2014-06-01

    An optimization study of a subcritical two-zone homogeneous reactor was carried out, taking into consideration geometry, material, and economic parameters. The advantage of a two-zone subcritical system over a single-zone system is demonstrated. The study investigated the optimal volume ratio for the inner and outer zones of the subcritical reactor, in terms of the neutron-physical parameters as well as fuel cost. Optimal geometrical parameters of the system are suggested for different material compositions.

  6. Probing cold dense nuclear matter.

    PubMed

    Subedi, R; Shneor, R; Monaghan, P; Anderson, B D; Aniol, K; Annand, J; Arrington, J; Benaoum, H; Benmokhtar, F; Boeglin, W; Chen, J-P; Choi, Seonho; Cisbani, E; Craver, B; Frullani, S; Garibaldi, F; Gilad, S; Gilman, R; Glamazdin, O; Hansen, J-O; Higinbotham, D W; Holmstrom, T; Ibrahim, H; Igarashi, R; de Jager, C W; Jans, E; Jiang, X; Kaufman, L J; Kelleher, A; Kolarkar, A; Kumbartzki, G; Lerose, J J; Lindgren, R; Liyanage, N; Margaziotis, D J; Markowitz, P; Marrone, S; Mazouz, M; Meekins, D; Michaels, R; Moffit, B; Perdrisat, C F; Piasetzky, E; Potokar, M; Punjabi, V; Qiang, Y; Reinhold, J; Ron, G; Rosner, G; Saha, A; Sawatzky, B; Shahinyan, A; Sirca, S; Slifer, K; Solvignon, P; Sulkosky, V; Urciuoli, G M; Voutier, E; Watson, J W; Weinstein, L B; Wojtsekhowski, B; Wood, S; Zheng, X-C; Zhu, L

    2008-06-13

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars. PMID:18511658

  7. Probing Cold Dense Nuclear Matter

    SciTech Connect

    Subedi, Ramesh; Shneor, R.; Monaghan, Peter; Anderson, Bryon; Aniol, Konrad; Annand, John; Arrington, John; Benaoum, Hachemi; Benmokhtar, Fatiha; Bertozzi, William; Boeglin, Werner; Chen, Jian-Ping; Choi, Seonho; Cisbani, Evaristo; Craver, Brandon; Frullani, Salvatore; Garibaldi, Franco; Gilad, Shalev; Gilman, Ronald; Glamazdin, Oleksandr; Hansen, Jens-Ole; Higinbotham, Douglas; Holmstrom, Timothy; Ibrahim, Hassan; Igarashi, Ryuichi; De Jager, Cornelis; Jans, Eddy; Jiang, Xiaodong; Kaufman, Lisa; Kelleher, Aidan; Kolarkar, Ameya; Kumbartzki, Gerfried; LeRose, John; Lindgren, Richard; Liyanage, Nilanga; Margaziotis, Demetrius; Markowitz, Pete; Marrone, Stefano; Mazouz, Malek; Meekins, David; Michaels, Robert; Moffit, Bryan; Perdrisat, Charles; Piasetzky, Eliazer; Potokar, Milan; Punjabi, Vina; Qiang, Yi; Reinhold, Joerg; Ron, Guy; Rosner, Guenther; Saha, Arunava; Sawatzky, Bradley; Shahinyan, Albert; Sirca, Simon; Slifer, Karl; Solvignon, Patricia; Sulkosky, Vincent; Urciuoli, Guido; Voutier, Eric; Watson, John; Weinstein, Lawrence; Wojtsekhowski, Bogdan; Wood, Stephen; Zheng, Xiaochao; Zhu, Lingyan

    2008-06-01

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.

  8. Magnetism in Dense Quark Matter

    NASA Astrophysics Data System (ADS)

    Ferrer, Efrain J.; de la Incera, Vivian

    We review the mechanisms via which an external magnetic field can affect the ground state of cold and dense quark matter. In the absence of a magnetic field, at asymptotically high densities, cold quark matter is in the Color-Flavor-Locked (CFL) phase of color superconductivity characterized by three scales: the superconducting gap, the gluon Meissner mass, and the baryonic chemical potential. When an applied magnetic field becomes comparable with each of these scales, new phases and/or condensates may emerge. They include the magnetic CFL (MCFL) phase that becomes relevant for fields of the order of the gap scale; the paramagnetic CFL, important when the field is of the order of the Meissner mass, and a spin-one condensate associated to the magnetic moment of the Cooper pairs, significant at fields of the order of the chemical potential. We discuss the equation of state (EoS) of MCFL matter for a large range of field values and consider possible applications of the magnetic effects on dense quark matter to the astrophysics of compact stars.

  9. Inference by replication in densely connected systems.

    PubMed

    Neirotti, Juan P; Saad, David

    2007-10-01

    An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance. PMID:17995074

  10. Inference by replication in densely connected systems

    SciTech Connect

    Neirotti, Juan P.; Saad, David

    2007-10-15

    An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.

  11. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.

  12. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general-purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth, and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple-reduction optimization capability in the future.
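    A minimal sketch of the optimization setup just described, using scipy in place of COPES/ADS; both model functions are invented stand-ins for the gear-analysis subroutines, and the number of teeth is treated as continuous for simplicity.

```python
import numpy as np
from scipy.optimize import minimize

def gear_weight(x):               # toy objective
    face_width, n_teeth, pitch = x
    return face_width * (n_teeth / pitch) ** 2

def gear_life(x):                 # toy response [hours]
    face_width, n_teeth, pitch = x
    return 1.0e4 * face_width * n_teeth / pitch

# Minimize weight subject to a minimum-life constraint and side bounds.
res = minimize(
    gear_weight,
    x0=np.array([2.0, 30.0, 8.0]),                  # face width, teeth, pitch
    bounds=[(0.5, 5.0), (12.0, 60.0), (4.0, 16.0)],
    constraints=[{"type": "ineq", "fun": lambda x: gear_life(x) - 5.0e4}],
)
print(res.x, res.fun)
```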

  13. Human Action Recognition Using Improved Salient Dense Trajectories

    PubMed Central

    Li, Qingwu; Cheng, Haisu; Zhou, Yan; Huo, Guanying

    2016-01-01

    Human action recognition in videos is a topic of active research in computer vision. Dense trajectory (DT) features were shown to be efficient for representing videos in state-of-the-art approaches. In this paper, we present a more effective approach of video representation using improved salient dense trajectories: first, detecting the motion salient region and extracting the dense trajectories by tracking interest points in each spatial scale separately and then refining the dense trajectories via the analysis of the motion saliency. Then, we compute several descriptors (i.e., trajectory displacement, HOG, HOF, and MBH) in the spatiotemporal volume aligned with the trajectories. Finally, in order to represent the videos better, we optimize the framework of bag-of-words according to the motion salient intensity distribution and the idea of sparse coefficient reconstruction. Our architecture is trained and evaluated on the four standard video actions datasets of KTH, UCF sports, HMDB51, and UCF50, and the experimental results show that our approach performs competitively comparing with the state-of-the-art results. PMID:27293425
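    As a sketch of the bag-of-words step named above (synthetic arrays stand in for real HOG/HOF/MBH descriptors; the paper's saliency-weighted variant and sparse-reconstruction refinement are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(5000, 96))   # pooled training features
video_descriptors = rng.normal(size=(800, 96))    # one video's features

K = 64                                            # codebook size
codebook = KMeans(n_clusters=K, n_init=10, random_state=0)
codebook.fit(train_descriptors)

words = codebook.predict(video_descriptors)       # quantize to visual words
hist = np.bincount(words, minlength=K).astype(float)
hist /= hist.sum()                                # normalized BoW vector
```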

  14. Rainbow beads: a color coding method to facilitate high-throughput screening and optimization of one-bead one-compound combinatorial libraries.

    PubMed

    Luo, Juntao; Zhang, Hongyong; Xiao, Wenwu; Kumaresan, Pappanaicken R; Shi, Changying; Pan, Chong-Xian; Aina, Olulanu H; Lam, Kit S

    2008-01-01

    We have developed a new color-encoding method that facilitates high-throughput screening of one-bead one-compound (OBOC) combinatorial libraries. Polymer beads displaying chemical compounds or families of compounds are stained with oil-based organic dyes that are used as coding tags. The color dyes do not affect cell binding to the compounds displayed on the surface of the beads. We have applied such rainbow beads in a multiplex manner to discover and profile ligands against cell surface receptors. In the first application, a series of OBOC libraries with different scaffolds or motifs are each color-coded; small samples of each library are then combined and screened concurrently against live cells for cell attachment. Preferred libraries can be rapidly identified and selected for subsequent large-scale screenings for cell surface binding ligands. In a second application, beads with a series of peptide analogues (e.g., alanine scan) are color-coded, combined, and tested for binding against a specific cell line in a single-tissue culture well; the critical residues required for binding can be easily determined. In a third application, ligands reacting against a series of integrins are color-coded and used as a readily applied research tool to determine the integrin profile of any cell type. One major advantage of this straightforward and yet powerful method is that only an ordinary inverted microscope is needed for the analysis, instead of sophisticated (and expensive) fluorescent microscopes or flow cytometers. PMID:18558750

  15. Dynamics and evolution of dense stellar systems

    NASA Astrophysics Data System (ADS)

    Fregeau, John M.

    2004-10-01

    The research presented in this thesis comprises a theoretical study of several aspects relating to the dynamics and evolution of dense stellar systems such as globular clusters. First, I present the results of a study of mass segregation in two-component star clusters, based on a large number of numerical N-body simulations using our Monte-Carlo code. Heavy objects, which could represent stellar remnants such as neutron stars or black holes, exhibit behavior that is in quantitative agreement with simple analytical arguments. Light objects, which could represent free-floating planets or brown dwarfs, are predominantly lost from the cluster, as expected from simple analytical arguments, but may remain in the halo in larger numbers than expected. Using a recent null detection of planetary-mass microlensing events in M22, I find an upper limit of ˜25% at the 63% confidence level for the current mass fraction of M22 in the form of very low-mass objects. Turning to more realistic clusters, I present a study of the evolution of clusters containing primordial binaries, based on an enhanced version of the Monte-Carlo code that treats binary interactions via cross sections and analytical prescriptions. All models exhibit a long-lived “binary burning” phase lasting many tens of relaxation times. The structural parameters of the models during this phase match well those of most observed Galactic globular clusters. At the end of this phase, clusters that have survived tidal disruption undergo deep core collapse, followed by gravothermal oscillations. The results clearly show that the presence of even a small fraction of binaries in a cluster is sufficient to support the core against collapse significantly beyond the normal core collapse time predicted without the presence of binaries. For tidally truncated systems, collapse is delayed sufficiently that the cluster will undergo complete tidal disruption before core collapse. Moving a step beyond analytical prescriptions, I

  16. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  17. DPIS for warm dense matter

    SciTech Connect

    Kondo, K.; Kanesue, T.; Horioka, K.; Okamura, M.

    2010-05-23

    Warm Dense Matter (WDM) poses a challenging problem because WDM, which is beyond the ideal plasma regime, is in a low temperature and high density state with partially degenerate electrons and coupled ions. WDM is a common state of matter in astrophysical objects such as cores of giant planets and white dwarfs. WDM studies require large energy deposition into a small target volume in a shorter time than the hydrodynamical time, and need uniformity across the full thickness of the target. Since moderate energy ion beams (~0.3 MeV/u) can be a useful tool for WDM physics, we propose WDM generation using the Direct Plasma Injection Scheme (DPIS). In the DPIS, a laser ion source is connected directly to the Radio Frequency Quadrupole (RFQ) linear accelerator without a beam transport line. DPIS with a realistic final focus and a linear accelerator can produce WDM.

  18. Uniformly dense polymeric foam body

    DOEpatents

    Whinnery, Jr., Leroy

    2003-07-15

    A method for providing a uniformly dense polymer foam body having a density between about 0.013 g/cm³ and about 0.5 g/cm³ is disclosed. The method utilizes a thermally expandable polymer microsphere material wherein some of the microspheres are unexpanded and some are only partially expanded. It is shown that by mixing the two types of materials in appropriate ratios to achieve the desired bulk final density, filling a mold with this mixture so as to displace all or essentially all of the internal volume of the mold, heating the mold for a predetermined interval at a temperature above about 130 °C, and then cooling the mold to a temperature below 80 °C, the molded part achieves a bulk density which varies by less than about ±6% everywhere throughout the part volume.
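    A hedged arithmetic sketch of the ratio selection: if the unexpanded and partially expanded materials mold to end-member bulk densities rho_u and rho_p on their own, a linear rule of mixtures over volume fractions gives the blend for a target density. The patent does not state this formula, and the densities below are invented.

```python
rho_u, rho_p = 0.50, 0.08    # g/cm^3, hypothetical end-member densities
rho_target = 0.20            # g/cm^3, desired bulk density

# rho_target = x * rho_u + (1 - x) * rho_p, solved for volume fraction x.
x = (rho_target - rho_p) / (rho_u - rho_p)
print(f"volume fraction of unexpanded microspheres: {x:.2f}")
```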

  19. Dense inhibitory connectivity in neocortex

    PubMed Central

    Fino, Elodie; Yuste, Rafael

    2011-01-01

    The connectivity diagram of neocortical circuits is still unknown, and there are conflicting data as to whether cortical neurons are wired specifically or not. To investigate the basic structure of cortical microcircuits, we use a novel two-photon photostimulation technique that enables the systematic mapping of synaptic connections with single-cell resolution. We map the inhibitory connectivity between upper-layer somatostatin-positive GABAergic interneurons and pyramidal cells in mouse frontal cortex. Most, and sometimes all, inhibitory neurons are locally connected to every sampled pyramidal cell. This dense inhibitory connectivity is found at both young and mature developmental ages. Inhibitory innervation of neighboring pyramidal cells is similar, regardless of whether they are connected among themselves or not. We conclude that local inhibitory connectivity is promiscuous, does not form subnetworks, and can approach the theoretical limit of a completely connected synaptic matrix. PMID:21435562

  20. Viscoelastic behavior of dense microemulsions

    NASA Astrophysics Data System (ADS)

    Cametti, C.; Codastefano, P.; D'arrigo, G.; Tartaglia, P.; Rouch, J.; Chen, S. H.

    1990-09-01

    We have performed extensive measurements of shear viscosity, ultrasonic absorption, and sound velocity in a ternary system consisting of water, decane, and sodium di(2-ethylhexyl)sulfosuccinate (AOT), in the one-phase region where it forms a water-in-oil microemulsion. We observe a rapid increase of the static shear viscosity in the dense microemulsion region. Correspondingly, the sound absorption shows unambiguous evidence of viscoelastic behavior. The absorption data for various volume fractions and temperatures can be reduced to a universal curve by scaling both the absorption and the frequency by the measured static shear viscosity. The sound absorption can be interpreted as coming from the high-frequency tail of the viscoelastic relaxation, describable by a Cole-Cole relaxation formula with unusually small elastic moduli.
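    For reference, the Cole-Cole form referred to above is the standard one, written here for a generic complex modulus (symbols are conventional, not values from the paper):

```latex
M^{*}(\omega) = M_{\infty} + \frac{M_{0} - M_{\infty}}{1 + (i\omega\tau)^{1-\alpha}},
\qquad 0 \le \alpha < 1
```

    with α = 0 recovering a single Debye-type relaxation; the exponent α broadens the relaxation around the characteristic time τ.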

  1. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  2. The performance of dense medium processes

    SciTech Connect

    Horsfall, D.W.

    1993-12-31

    Dense medium washing in baths and cyclones is widely carried out in South Africa. The paper shows the reason for the preferred use of dense medium processes rather than gravity concentrators such as jigs. The factors leading to efficient separation in baths are listed and an indication given of the extent to which these factors may be controlled and embodied in the deployment of baths and dense medium cyclones in the planning stages of a plant.

  3. Improvements to the NASAP code

    NASA Technical Reports Server (NTRS)

    Perel, D.

    1980-01-01

    The FORTRAN code NASAP was modified and improved to provide the capability of transforming CAD-generated NASTRAN input data for use with DESAP II and/or DESAP I. The latter programs were developed for structural optimization.

  4. Understanding shape entropy through local dense packing

    PubMed Central

    van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; Engel, Michael; Glotzer, Sharon C.

    2014-01-01

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. Here, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (kBT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. We show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa. PMID:25344532

  5. Dense ceramic membranes for methane conversion

    SciTech Connect

    Balachandran, U.; Mieville, R.L.; Ma, B.; Udovich, C.A.

    1996-05-01

    This report focuses on a mechanism for oxygen transport through mixed-oxide conductors as used in dense ceramic membrane reactors for the partial oxidation of methane to syngas (CO and H{sub 2}). The in-situ separation of O{sub 2} from air by the membrane reactor saves the costly cryogenic separation step that is required in conventional syngas production. The mixed oxide of choice is SrFeCo{sub 0.5}O{sub x}, which exhibits high oxygen permeability and has been shown in previous studies to possess high stability in both oxidizing and reducing conditions; in addition, it can be readily formed into reactor configurations such as tubes. An understanding of the electrical properties and the defect dynamics in this material is essential and will help us to find the optimal operating conditions for the conversion reactor. In this paper, we discuss the conductivities of the SrFeCo{sub 0.5}O{sub x} system, which are dependent on temperature and partial pressure of oxygen. Based on the experimental results, a defect model is proposed to explain the electrical properties of this system. The oxygen permeability of SrFeCo{sub 0.5}O{sub x} is estimated by using conductivity data and is compared with that obtained from the methane conversion reaction.

  6. Understanding shape entropy through local dense packing.

    PubMed

    van Anders, Greg; Klotsa, Daphne; Ahmed, N Khalid; Engel, Michael; Glotzer, Sharon C

    2014-11-11

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. Here, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (kBT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. We show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa. PMID:25344532

  7. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  8. Discovering dense and consistent landmarks in the brain.

    PubMed

    Zhu, Dajiang; Zhang, Degang; Faraco, Carlos; Li, Kaiming; Deng, Fan; Chen, Hanbo; Jiang, Xi; Guo, Lei; Miller, L Stephen; Liu, Tianming

    2011-01-01

    The lack of consistent and reliable functionally meaningful landmarks in the brain has significantly hampered the advancement of brain imaging studies. In this paper, we use white matter fiber connectivity patterns, obtained from diffusion tensor imaging (DTI) data, as predictors of brain function and to discover a dense, reliable and consistent map of brain landmarks within and across individuals. The general principles and our strategies are as follows. 1) Each brain landmark should have a consistent structural fiber connectivity pattern across a group of subjects. We will quantitatively measure the similarity of the fiber bundles emanating from the corresponding landmarks via a novel trace-map approach, and then optimize the locations of these landmarks by maximizing the group-wise consistency of the shape patterns of emanating fiber bundles. 2) The landmark map should be dense and distributed all over major functional brain regions. We will initialize a dense and regular grid map of approximately 2000 landmarks that covers the whole brain in different subjects via linear brain image registration. 3) The dense map of brain landmarks should be reproducible and predictable in different datasets of various subject populations. The approaches and results of the above steps are evaluated and validated via reproducibility studies. The dense map of brain landmarks can be reliably and accurately replicated in a new DTI dataset, such that the landmark map can be used as a predictive model. Our experiments show promising results, and a subset of the discovered landmarks is validated via task-based fMRI. PMID:21761649

  9. QPhiX Code Generator

    Energy Science and Technology Software Center (ESTSC)

    2014-09-16

    A simple code generator that produces the low-level kernels used by the QPhiX library for lattice QCD. It generates kernels for the Wilson-Dslash and Wilson-Clover operators, and can be reused to write other optimized kernels for Intel Xeon Phi(tm), Intel Xeon(tm), and potentially other architectures.
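
    As a loose illustration of what such a generator does (this is not the actual QPhiX generator, whose kernels are far more elaborate), here is a minimal Python sketch that emits a specialized, unrolled C kernel from a couple of tuning knobs:

        # Hypothetical toy generator: emits an unrolled C axpy kernel.
        # The unroll factor and data type are the "knobs"; a real generator
        # would also emit SIMD intrinsics, prefetches, and remainder loops.
        def emit_axpy_kernel(unroll=4, dtype="float"):
            lines = [f"void axpy({dtype} a, const {dtype} *x, {dtype} *y, int n) {{",
                     f"  for (int i = 0; i + {unroll} <= n; i += {unroll}) {{"]
            lines += [f"    y[i + {u}] += a * x[i + {u}];" for u in range(unroll)]
            lines += ["  }", "}"]
            return "\n".join(lines)

        print(emit_axpy_kernel(unroll=4))   # remainder loop omitted in this sketch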

  10. Percolation in dense storage arrays

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, Scott; Wilcke, Winfried W.; Garner, Robert B.; Huels, Harald

    2002-11-01

    As computers and their accessories become smaller, cheaper, and faster, the providers of news, retail sales, and other services we now take for granted on the Internet have met their increasing computing needs by putting more and more computers, hard disks, power supplies, and the data communications linking them to each other and to the rest of the wired world into ever smaller spaces. This has created a new and quite interesting percolation problem. It is no longer desirable to fix computers, storage, or switchgear that fail in such a dense array; attempts to repair things are all too likely to make problems worse. The alternative approach, letting units “fail in place”, be removed from service, and routed around, means that a data communications environment will evolve with an underlying regular structure but a very high density of missing pieces. Some of the properties of this kind of network can be described within the existing paradigm of site or bond percolation on lattices, but other important questions have not been explored. I will discuss 3D arrays of hundreds to thousands of storage servers (something which it is quite feasible to build in the next few years), and show that bandwidth, but not percolation fraction or shortest path lengths, is the critical factor affected by the “fail in place” disorder. Redundancy strategies traditionally employed in storage systems may have to be revised. Novel approaches to routing information among the servers have been developed to minimize the impact.
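
    The lattice-percolation framing is easy to reproduce at toy scale. The sketch below is an assumption-level illustration (small grid, 6-connected sites), not the storage-array model of the paper: occupy sites of a 3D lattice with probability p and test whether an occupied cluster spans the array:

        # Toy 3D site percolation: does an occupied cluster span top to bottom?
        import numpy as np
        from scipy import ndimage

        def spans(p, size=24, seed=0):
            occupied = np.random.default_rng(seed).random((size,) * 3) < p
            labels, _ = ndimage.label(occupied)            # 6-connected clusters
            shared = set(labels[0].ravel()) & set(labels[-1].ravel())
            return bool(shared - {0})                      # nonzero label on both faces?

        for p in (0.2, 0.3, 0.4):
            print(p, spans(p))
        # The 3D site-percolation threshold is near p_c ~ 0.312; small grids
        # and single samples make answers near p_c noisy.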

  11. Improving turbo-like codes using iterative decoder analysis

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Dolinar, S.; Pollara, F.

    2001-01-01

    The density evolution method is used to analyze the performance and optimize the structure of parallel and serial turbo codes, and generalized serial concatenations of mixtures of different outer and inner codes. Design examples are given for mixture codes.
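
    Density evolution is easiest to demonstrate in its simplest textbook setting, so the hedged sketch below uses a regular (3,6) LDPC ensemble on the binary erasure channel rather than the turbo constructions analyzed above: the recursion tracks the erasure probability of decoder messages, and a bisection locates the decoding threshold.

        # Density evolution on the BEC for a regular (dv, dc) LDPC ensemble:
        # x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1). Bisect on the channel
        # erasure rate eps for the largest value where x_l -> 0.
        def de_threshold(dv=3, dc=6):
            lo, hi = 0.0, 1.0
            while hi - lo > 1e-4:
                eps = (lo + hi) / 2
                x = eps
                for _ in range(5000):
                    x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                lo, hi = (eps, hi) if x < 1e-7 else (lo, eps)
            return lo

        print(de_threshold())   # ~0.4294 for the (3,6) ensemble (rate 1/2)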

  12. Design of a 100 J Dense Plasma Focus Z-pinch Device as a Portable Neutron Source

    NASA Astrophysics Data System (ADS)

    Jiang, Sheng; Higginson, Drew; Link, Anthony; Liu, Jason; Schmidt, Andrea

    2015-11-01

    Dense plasma focus (DPF) Z-pinch devices are capable of accelerating ions to high energies through MV/mm-scale electric fields. When deuterium is used as the filling gas, neutrons are generated through beam-target fusion when fast D beams collide with the bulk plasma. The neutron yield of a DPF scales favorably with current, making such devices attractive as portable sources for active interrogation. Past DPF experiments have been optimized empirically; here we use the particle-in-cell (PIC) code LSP to optimize a portable DPF for high neutron yield prior to building it. In this work, we are designing a DPF device with about 100 J of energy which can generate 10^6 - 10^7 neutrons. The simulations are run in fluid mode for the rundown phase and are switched to kinetic mode to capture the anomalous resistivity and beam acceleration process during the pinch. Scans of driver parameters, anode geometries, and gas pressures are performed to maximize the neutron yield. The optimized design is currently under construction. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL.

  13. Dense packings of polyhedra: Platonic and Archimedean solids

    NASA Astrophysics Data System (ADS)

    Torquato, S.; Jiao, Y.

    2009-10-01

    Understanding the nature of dense particle packings is a subject of intense research in the physical, mathematical, and biological sciences. The preponderance of previous work has focused on spherical particles and very little is known about dense polyhedral packings. We formulate the problem of generating dense packings of nonoverlapping, nontiling polyhedra within an adaptive fundamental cell subject to periodic boundary conditions as an optimization problem, which we call the adaptive shrinking cell (ASC) scheme. This optimization problem is solved here (using a variety of multiparticle initial configurations) to find the dense packings of each of the Platonic solids in three-dimensional Euclidean space R^3, except for the cube, which is the only Platonic solid that tiles space. We find the densest known packings of tetrahedra, icosahedra, dodecahedra, and octahedra with densities 0.823…, 0.836…, 0.904…, and 0.947…, respectively. It is noteworthy that the densest tetrahedral packing possesses no long-range order. Unlike the densest tetrahedral packing, which must not be a Bravais lattice packing, the densest packings of the other nontiling Platonic solids that we obtain are their previously known optimal (Bravais) lattice packings. We also derive a simple upper bound on the maximal density of packings of congruent nonspherical particles and apply it to Platonic solids, Archimedean solids, superballs, and ellipsoids. Provided that what we term the “asphericity” (ratio of the circumradius to inradius) is sufficiently small, the upper bounds are relatively tight and thus close to the corresponding densities of the optimal lattice packings of the centrally symmetric Platonic and Archimedean solids. Our simulation results, rigorous upper bounds, and other theoretical arguments lead us to the conjecture that the densest packings of Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This can be

  14. Computer Code

    NASA Technical Reports Server (NTRS)

    1985-01-01

    COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods which can be compared for accuracy. MINIVER saved Marquardt three months in computer time and $15,000.

  15. DNA codes

    SciTech Connect

    Torney, D. C.

    2001-01-01

    We have begun to characterize a variety of codes, motivated by potential implementation as (quaternary) DNA n-sequences, with letters denoted A, C, G, and T. The first codes we studied are the most reminiscent of conventional group codes. For these codes, Hamming similarity was generalized so that the score for matched letters takes more than one value, depending upon which letters are matched [2]. These codes consist of n-sequences satisfying an upper bound on the similarities, summed over the letter positions, of distinct codewords. We chose similarity 2 for matches of the letters A and T and 3 for matches of the letters C and G, providing a rough approximation to double-strand bond energies in DNA. An inherent novelty of DNA codes is 'reverse complementation'. The latter may be defined, as follows, not only for alphabets of size four but, more generally, for any even-size alphabet. All that is required is a matching of the letters of the alphabet: a partition into pairs. Then, the reverse complement of a codeword is obtained by reversing the order of its letters and replacing each letter by its match. For DNA, the matching is AT/CG because these are the Watson-Crick bonding pairs. Reversal arises because two DNA sequences form a double strand with opposite relative orientations. Thus, as will be described in detail, because in vitro decoding involves the formation of double-stranded DNA from two codewords, it is reasonable to assume - for universal applicability - that the reverse complement of any codeword is also a codeword. In particular, self-reverse complementary codewords are expressly forbidden in reverse-complement codes. Thus, an appropriate distance between all pairs of codewords must, when large, effectively prohibit binding between the respective codewords to form a double strand. Only reverse-complement pairs of codewords should be able to bind. For most applications, a DNA code is to be bi-partitioned, such that the reverse-complementary pairs are separated
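
    The scoring and reverse-complement rules described above translate directly into code. A minimal sketch under the stated assumptions (match score 2 for the letters A and T, 3 for C and G; Watson-Crick matching A-T, C-G):

        # Weighted similarity and reverse complement for DNA codewords.
        MATCH = {"A": 2, "T": 2, "C": 3, "G": 3}   # rough bond-energy weights
        PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

        def similarity(u, v):
            # sum of match scores over positions where the letters agree
            return sum(MATCH[a] for a, b in zip(u, v) if a == b)

        def reverse_complement(u):
            return "".join(PAIR[c] for c in reversed(u))

        print(similarity("ACGT", "ACGA"))       # 2 + 3 + 3 = 8
        print(reverse_complement("AACG"))       # CGTT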

  16. Radiative properties of hot dense matter III. Proceedings. Meeting on Radiative Properties of Hot Dense Matter 1996.

    NASA Astrophysics Data System (ADS)

    Lee, R. W.

    1997-12-01

    The papers consider the radiative properties of hot dense matter. Numerous contributions were directed at understanding the behavior of plasma not in local thermodynamic equilibrium (NLTE). Contributors have analyzed warm dense matter, inertial confinement fusion implosion cores, femtosecond-pulse laser-generated plasmas, colliding plasmas, and nanosecond long-pulse laser-generated plasmas. In all of these reports the level of sophistication is advanced, with effects of non-Maxwellian distributions, laser-modified transitions, polarization effects, and mind-numbing atomic structure models being presented. To ascertain the validity of these NLTE kinetics codes, two kinetics code comparisons are reported, which attempt to provide insight into the workings of the kinetics models. The LTE work is directed largely towards the area of opacity studies, where both experimental and theoretical efforts were reported. Moreover, the topics of spectral line shapes and the plasma microfield are given a strong airing. Recent advances and the addition of new effects including magnetic fields, laser pumping, and continuum perturbing states are presented. Finally, many of the contributors present a detailed discussion of the instrumentation that is central to the spectroscopy, providing new paths for future experimental and theoretical advances.

  17. Polarimetric active imaging in dense fog

    NASA Astrophysics Data System (ADS)

    Bernier, Robert; Cao, Xiaoying; Tremblay, Grégoire; Roy, Gilles

    2015-10-01

    Operation under degraded visual environment (DVE) presents important strategic advantages. 3D mapping has been performed under DVE and good quality images have been obtained through DVE with active imaging systems. In these applications, the presence of fog clouds degrades the quality of the remotely sensed signal or even renders the operation totally impossible. In view of making the active imaging method more robust against dense fog, the use of polarimetry is herein studied. Spherical particles typical of fog do not depolarize incident polarized light in the backscattering (180°) direction. So, in principle, there should be less dazzling caused by aerosols for active imaging systems operating using the secondary polarization. However, strong depolarization still occurs at angles close to 180°. The greater the ratio of size to wavelength, the closer to 180° will the depolarization occur. When the cloud optical depth is small, the major scattering events seen by an active camera are the single backscattering events. However, when the optical depth of the cloud is higher than 1, multiple scattering becomes more important and causes depolarization due to the backscattering around 180°. The physics of this process will be discussed. Experimental results supporting the analysis will be presented. Those experimental results were obtained under controlled environment using the DRDC-Valcartier aerosol chamber. The experimental method herein proposed is based upon the use of ICCD range gated cameras wherein gate width and gate location may be varied on the fly. The optimal conditions for the use of these devices in view of obtaining the best image contrast are experimentally studied and reported in this paper.

  18. Atomic Transitions in Dense Plasmas

    NASA Astrophysics Data System (ADS)

    Murillo, Michael Sean

    Motivation for the study of hot, dense (~solid density) plasmas has historically been in connection with stellar interiors. In recent years, however, there has been a growing interest in such plasmas due to their relevance to short-wavelength (EUV and X-ray) lasers, inertial confinement fusion, and optical harmonic generation. In contrast to the stellar plasmas, these laboratory plasmas are typically composed of high-Z elements and are not in thermal equilibrium. Descriptions of nonthermal plasma experiments must necessarily involve the consideration of the various atomic processes and the rates at which they occur. Traditionally, the rates of collisional atomic processes are calculated by considering a binary collision picture. For example, a single electron may be taken to collisionally excite an ion. A cross section may be defined for this process and, multiplying by a flux, the rate may be obtained. In a high density plasma this binary picture clearly breaks down, as the electrons no longer act independently of each other. The cross section is ill-defined in this regime and another approach is needed to obtain rates. In this thesis an approach based on computing rates without recourse to a cross section is presented. In this approach, binary collisions are replaced by stochastic density fluctuations. It is then these density fluctuations which drive transitions in the ions. Furthermore, the oscillator strengths for the transitions are computed in screened Coulomb potentials which reflect the average polarization of the plasma near the ion. Numerical computations are presented for the collisional ionization rate. The effects of screening in the plasma-ion interaction are investigated for He^+ ions in a plasma near solid density. It is shown that dynamic screening plays an important role in this process. Then, density effects in the oscillator strength are explored for both He^+ and Ar^{+17}. Approximations which introduce a nonorthogonality between the initial

  19. Neutrino Propagation in Dense Magnetized Matter

    NASA Astrophysics Data System (ADS)

    Arbuzova, E. V.; Lobanov, A. E.; Murchikova, E. M.

    2009-01-01

    We obtained a complete system of solutions of the Dirac-Pauli equation for a massive neutrino interacting with dense matter and strong electromagnetic field. We demonstrated that these solutions can describe precession of the neutrino spin.

  20. Wide Variation Seen in 'Dense' Breast Diagnoses

    MedlinePlus

    ... defined mammography patients' breasts as dense. Higher breast density is a risk factor for breast cancer, experts ... could have implications for the so-called breast density notification laws that have been passed in about ...

  1. Dissociation energy of molecules in dense gases

    NASA Technical Reports Server (NTRS)

    Kunc, J. A.

    1992-01-01

    A general approach is presented for calculating the reduction of the dissociation energy of diatomic molecules immersed in a dense (n ≤ 10^22 cm^-3) gas of molecules and atoms. The dissociation energy of a molecule in a dense gas differs from that of the molecule in vacuum because the intermolecular forces change the intramolecular dynamics of the molecule and, consequently, the energy of the molecular bond.

  2. Magnetic Phases in Dense Quark Matter

    SciTech Connect

    Incera, Vivian de la

    2007-10-26

    In this paper I discuss the magnetic phases of the three-flavor color superconductor. These phases can take place at different field strengths in a highly dense quark system. Given that the best natural candidates for the realization of color superconductivity are the extremely dense cores of neutron stars, which typically have very large magnetic fields, the magnetic phases here discussed could have implications for the physics of these compact objects.

  3. Dense loading of catalyst improves hydrotreater performance

    SciTech Connect

    Nooy, F.M.

    1984-11-12

    This paper discusses the advantages of increased capacity and improved catalyst/oil contact in existing hydrotreating units. The similarities between catalyst loading and other material processes are reviewed. Catalyst bed activity is examined. Dense loading systems are reviewed in detail. Over the last years, many refiners have gained experience with the benefits of dense loading techniques, and these techniques are gaining more and more acceptance.

  4. Dynamical theory of dense groups of galaxies

    NASA Technical Reports Server (NTRS)

    Mamon, Gary A.

    1990-01-01

    It is well known that galaxies associate in groups and clusters. Perhaps 40% of all galaxies are found in groups of 4 to 20 galaxies (e.g., Tully 1987). Although most groups appear to be so loose that the galaxy interactions within them ought to be insignificant, the apparently densest groups, known as compact groups, appear so dense when seen in projection onto the plane of the sky that their members often overlap. These groups thus appear as dense as the cores of rich clusters. The most popular catalog of compact groups, compiled by Hickson (1982), includes isolation among its selection criteria. Therefore, in comparison with the cores of rich clusters, Hickson's compact groups (HCGs) appear to be the densest isolated regions in the Universe (in galaxies per unit volume), and thus provide in principle a clean laboratory for studying the competition of very strong gravitational interactions. The $64,000 question here is then: Are compact groups really bound systems as dense as they appear? If dense groups indeed exist, then one expects that each of the dynamical processes leading to the interaction of their member galaxies should be greatly enhanced. This leads us to the questions: How stable are dense groups? How do they form? And the related question, fascinating to any theorist: What dynamical processes predominate in dense groups of galaxies? If HCGs are not bound dense systems, but instead 1D chance alignments (Mamon 1986, 1987; Walke & Mamon 1989) or 3D transient cores (Rose 1979) within larger looser systems of galaxies, then the relevant question is: How frequent are chance configurations within loose groups? Here, the author answers these last four questions after comparing in some detail the methods used and the results obtained in the different studies of dense groups.

  5. Fabric variables in dense sheared suspensions

    NASA Astrophysics Data System (ADS)

    Radjai, Farhang; Amarsid, Lhassan; Delenne, Jean-Yves

    The rheology of granular flows and dense suspensions can be described in terms of their effective shear and bulk viscosities as a function of packing fraction. Using stress partition and equivalence between frictional and viscous descriptions in the dense state, we show that the effective viscosities can be expressed in terms of the force-network anisotropy. This is supported by our extensive DEM-LBM simulations for a broad range of inertial and viscous parameters.

  6. Trellis Decoding Complexity of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; McEliece, R. J.; Lin, W.; Ekroot, L.; Dolinar, S.

    1995-01-01

    We consider the problem of finding a trellis for a linear block code that minimizes one or more measures of trellis complexity. The domain of optimization may be different permutations of the same code, or different codes with the same parameters. Constraints on trellises, including relationships between the minimal trellis of a code and that of the dual code, are used to derive bounds on complexity. We define a partial ordering on trellises: if a trellis is optimum with respect to this partial ordering, it has the desirable property that it simultaneously minimizes all of the complexity measures examined. We examine properties of such optimal trellises and give examples of optimal permutations of codes, most notably the (48,24,12) quadratic residue code.
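
    The state-space dimensions being minimized can be computed directly from a generator matrix via the standard past/future rank formula s_i = rank(G_left) + rank(G_right) - k, with ranks over GF(2). The sketch below is our own illustration, not the paper's machinery; the example matrix generates the (8,4) extended Hamming code, and permuting its columns changes the profile, which is exactly the freedom the optimization over permutations exploits.

        # State-complexity profile of the minimal trellis of a binary linear code.
        import numpy as np

        def gf2_rank(m):
            m, rank = m.copy() % 2, 0
            for col in range(m.shape[1]):
                pivots = np.nonzero(m[rank:, col])[0]
                if pivots.size == 0:
                    continue
                m[[rank, rank + pivots[0]]] = m[[rank + pivots[0], rank]]   # swap pivot up
                m[(m[:, col] == 1) & (np.arange(len(m)) != rank)] ^= m[rank]  # clear column
                rank += 1
                if rank == m.shape[0]:
                    break
            return rank

        def state_profile(G):
            k, n = G.shape
            return [gf2_rank(G[:, :i]) + gf2_rank(G[:, i:]) - k for i in range(n + 1)]

        G = np.array([[1,1,1,1,0,0,0,0],          # (8,4) extended Hamming code
                      [0,0,1,1,1,1,0,0],
                      [0,0,0,0,1,1,1,1],
                      [0,1,0,1,0,1,0,1]], dtype=np.uint8)
        print(state_profile(G))   # [0, 1, 2, 3, 2, 3, 2, 1, 0] for this ordering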

  7. METHOD OF PRODUCING DENSE CONSOLIDATED METALLIC REGULUS

    DOEpatents

    Magel, T.T.

    1959-08-11

    A method is presented for reducing dense metal compositions while simultaneously separating impurities from the reduced dense metal and casting the reduced purified dense metal, such as uranium, into well consolidated metal ingots. The reduction is accomplished by heating the dense metallic salt in the presence of a reducing agent, such as an alkali metal or alkaline earth metal, in a bomb type reacting chamber, while applying centrifugal force on the reacting materials. Separation of the metal from the impurities is accomplished essentially by the incorporation of a constricted passageway at the vertex of a conical reacting chamber which is in direct communication with a collecting chamber. When a centrifugal force is applied to the molten metal and slag from the reduction in a direction collinear with the axis of the constricted passage, the dense molten metal is forced therethrough while the less dense slag is retained within the reaction chamber, resulting in a simultaneous separation of the reduced molten metal from the slag and a compacting of the reduced metal in a homogeneous mass.

  8. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  9. Computational experience with a dense column feature for interior-point methods

    SciTech Connect

    Wenzel, M.; Czyzyk, J.; Wright, S.

    1997-08-01

    Most software that implements interior-point methods for linear programming formulates the linear algebra at each iteration as a system of normal equations. This approach can be extremely inefficient when the constraint matrix has dense columns, because the normal-equations matrix is then much denser than the constraint matrix and the system is expensive to solve. In this report the authors describe a more efficient approach for this case, which handles the dense columns by using a Schur-complement method and conjugate-gradient iteration. The authors report numerical results with the code PCx, into which the technique has now been incorporated.
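
    The effect can be seen in miniature. The sketch below is an assumed setup, not the PCx internals: with A = [As | d], the normal-equations matrix is As As^T + d d^T, and a solve against it needs only the easy factorization of the sparse part plus a rank-one (Sherman-Morrison) correction for the dense column:

        # Dense-column handling via a rank-one (Sherman-Morrison) update.
        import numpy as np

        rng = np.random.default_rng(1)
        m = 200
        S = np.diag(rng.uniform(1.0, 2.0, m))   # stand-in for the sparse part As As^T
        d = rng.normal(size=m)                  # contribution of one dense column
        b = rng.normal(size=m)

        u, v = np.linalg.solve(S, d), np.linalg.solve(S, b)
        x = v - u * (d @ v) / (1.0 + d @ u)     # (S + d d^T)^{-1} b via sparse solves only

        print(np.allclose(x, np.linalg.solve(S + np.outer(d, d), b)))   # True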

  10. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  11. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
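
    As a concrete taste of the error-detection piece, here is a bitwise CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1 with an all-ones preset. Treat the exact parameter choices as an assumption about the CCSDS recommendation rather than a quotation of it; the key property shown is that re-running the CRC over a frame with its checksum appended yields zero:

        # Bitwise CRC-16 (CCITT polynomial 0x1021, initial register value 0xFFFF).
        def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
            return crc

        frame = b"telemetry frame payload"       # hypothetical frame contents
        check = crc16_ccitt(frame)
        print(hex(check), crc16_ccitt(frame + check.to_bytes(2, "big")) == 0)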

  12. Maxmin lambda allocation for dense wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Tsai, Wei K.; Ros, Jordi

    2002-08-01

    We present a heuristic for computing the discrete maximum-minimum (maxmin) rates for dense WDM- (DWDM-) based optical subnetworks. Discrete maxmin allocation is proposed here as the preferred way of assigning wavelengths to the flows found to be suitable for lightpath switching. The discrete maxmin optimality condition is shown to be a unifying principle underlying both the continuous maxmin and discrete maxmin optimality conditions. Among the many discrete maxmin solutions for each assignment problem, lexicographic optimal solutions can be argued to be the best in the true sense of maxmin. However, the problem of finding lexicographic optimal solutions is known to be NP-complete (NP is the class that a nondeterministic Turing machine accepts in polynomial time). The heuristic proposed here is tested against all possible networks such that |G| + |W| ≤ 10, where G and W are the set of links and the set of flows of the network, respectively. Of the 1,084,112 possible networks, the heuristic produces the exact lexicographic solutions with 99.8% probability. Furthermore, for the 0.2% of cases in which the solutions are nonoptimal, 99.8% of these solutions are within the minimal possible distance from the true lexicographic optimal solutions.
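
    The continuous maxmin allocation that the discrete problem generalizes is computed by the classical progressive-filling algorithm, sketched below with hypothetical link capacities and flows: all rates rise together, flows crossing a saturated (bottleneck) link freeze, and the process repeats.

        # Progressive filling: continuous maxmin-fair rate allocation.
        def maxmin_rates(link_cap, flow_links):
            rate = {f: 0.0 for f in flow_links}
            active, cap = set(flow_links), dict(link_cap)
            while active:
                load = {l: sum(1 for f in active if l in flow_links[f]) for l in cap}
                share = {l: cap[l] / n for l, n in load.items() if n > 0}
                inc = min(share.values())            # increment to the next bottleneck
                for f in active:
                    rate[f] += inc                   # all active flows rise together
                for l, n in load.items():
                    cap[l] -= inc * n                # consume link capacity
                saturated = {l for l, s in share.items() if s == inc}
                active = {f for f in active if not flow_links[f] & saturated}
            return rate

        caps = {"L1": 10.0, "L2": 6.0}               # hypothetical capacities
        flows = {"f1": {"L1"}, "f2": {"L1", "L2"}, "f3": {"L2"}}
        print(maxmin_rates(caps, flows))             # f2 = f3 = 3.0, f1 = 7.0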

  13. Performance Assessment of Model-Based Optimal Feedforward and Feedback Current Profile Control in NSTX-U using the TRANSP Code

    NASA Astrophysics Data System (ADS)

    Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.

    2015-11-01

    Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. Firstly, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state, with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Secondly, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.

  14. MPQC: Performance Analysis and Optimization

    SciTech Connect

    Sarje, Abhinav; Williams, Samuel; Bailey, David

    2012-11-30

    MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.

  15. Formation and evolution of black holes in dense star clusters

    NASA Astrophysics Data System (ADS)

    Goswami, Sanghamitra

    Using supercomputer simulations combining stellar dynamics and stellar evolution, we have studied various problems related to the existence of black holes in dense star clusters. We consider both stellar and intermediate-mass black holes, and we focus on massive, dense star clusters, such as old globular clusters and young, so-called "super star clusters." The first problem concerns the formation of intermediate-mass black holes in young clusters through the runaway collision instability. A promising mechanism to form intermediate-mass black holes (IMBHs) is runaway mergers in dense star clusters, where main-sequence stars collide repeatedly and form a very massive star (VMS), which then collapses to a black hole (BH). Here we study the effects of primordial mass segregation and the importance of the stellar initial mass function (IMF) on the runaway growth of VMSs using a dynamical Monte Carlo code to model systems with N as high as 10^6 stars. Our Monte Carlo code includes an explicit treatment of all stellar collisions. We place special emphasis on the possibility of top-heavy IMFs, as observed in some very young massive clusters. We find that both primordial mass segregation and the shape of the IMF affect the rate of core collapse of star clusters and thus the time of the runaway. When we include primordial mass segregation we generally see a decrease in core collapse time (tcc). Although for smaller degrees of primordial mass segregation this decrease in tcc is mostly due to the change in the density profile of the cluster, for highly mass-segregated (primordial) clusters, it is the increase in the average mass in the core which reduces the central relaxation time, decreasing tcc. Finally, flatter IMFs generally increase the average mass in the whole cluster, which increases tcc. For the range of IMFs investigated in this thesis, this increase in tcc is to some degree balanced by stellar collisions, which accelerate core collapse. Thus there is no

  16. TDRSS telecommunication system PN code analysis

    NASA Technical Reports Server (NTRS)

    Gold, R.

    1977-01-01

    The pseudonoise (PN) code library for the Tracking and Data Relay Satellite System (TDRSS) Services was defined and described. The code library was chosen to minimize user transponder hardware requirements and optimize system performance. Special precautions were taken to insure sufficient code phase separation to minimize cross-correlation sidelobes, and to avoid the generation of spurious code components which would interfere with system performance.
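
    For a flavor of what such a PN-code library standardizes, the sketch below generates length-31 Gold codes by XORing a preferred pair of maximal-length LFSR sequences at every relative shift. The feedback taps are a textbook preferred pair for degree 5 (octal generators 45 and 75); the actual TDRSS code assignments and lengths are not reproduced here.

        # Gold codes from a preferred pair of 5-stage m-sequences.
        def lfsr(taps, n=5, length=31):
            state, out = [1] * n, []
            for _ in range(length):
                out.append(state[-1])                # output the last stage
                fb = 0
                for t in taps:
                    fb ^= state[t - 1]               # XOR of the tapped stages
                state = [fb] + state[:-1]            # shift in the feedback bit
            return out

        a = lfsr([5, 2])        # x^5 + x^2 + 1 (octal 45)
        b = lfsr([5, 4, 3, 2])  # x^5 + x^4 + x^3 + x^2 + 1 (octal 75)
        gold = [[x ^ b[(i + s) % 31] for i, x in enumerate(a)] for s in range(31)]
        print(len(gold))        # 31 shifted combinations (plus a and b themselves)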

  17. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  18. Combined trellis coding with asymmetric modulations

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1985-01-01

    The use of asymmetric signal constellations combined with optimized trellis coding to improve the performance of coded systems without increasing the average or peak power, or changing the bandwidth constraints of a system is discussed. The trellis code, asymmetric signal set, and Viterbi decoder of the system model are examined. The procedures for assigning signals to state transitions of the trellis code are described; the performance of the trellis coding system is evaluated. Examples of AM, QAM, and MPSK modulations with short memory trellis codes are presented.

  19. Coalescence preference in densely packed microbubbles

    SciTech Connect

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook

    2015-01-13

    A bubble merged from two parent bubbles with different size tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in a relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important to understand the reality of coalescence dynamics in a variety of packing situations of soft matter.

  20. Coalescence preference in densely packed microbubbles

    DOE PAGES Beta

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook

    2015-01-13

    A bubble merged from two parent bubbles with different size tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in a relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important to understand the reality of coalescence dynamics in a variety of packing situations of soft matter.

  1. Supplemental screening sonography in dense breasts.

    PubMed

    Berg, Wendie A

    2004-09-01

    In single-center trials across 42,838 examinations, 150 (0.35%) cancers were identified only sonographically in average-risk women. Over 90% of the 126 women with sonographically depicted cancers had dense or heterogeneously dense parenchyma. Of the 150 cancers, 141 (94%) were invasive, with a mean size of 9 to 11 mm across the series. Over 90% were node-negative. A 3-year multicenter trial of screening sonography in high-risk women, blinded to the results of mammography, opened for enrollment in April 2004, funded by the Avon Foundation and National Cancer Institute through the American College of Radiology Imaging Network (ACRIN Protocol 6666). If the trial is successful, the results will provide a rational basis for supplemental screening sonography in women with dense breasts. PMID:15337420

  2. [Metabolic syndrome and small dense LDL].

    PubMed

    Yoshino, Gen

    2006-12-01

    Due to the recent westernization of our lifestyle, it is speculated that the prevalence of metabolic syndrome in the young generation will increase in Japan. Unlike in Western populations, because of our lifestyle as "farmers" from ancient times, excess energy has been stored outside of the body, and the accumulation of visceral fat might have serious adverse effects on glucose and lipid metabolism. Therefore, we must carefully diagnose and treat patients with metabolic syndrome, which is diagnosed based on the existence of visceral obesity. On the other hand, much attention has been paid recently to the atherogenicity of small dense LDL. In this chapter I introduce a newly established method for estimating the plasma concentration of small dense LDL-cholesterol. Furthermore, the relationship between subclinical atherosclerosis and small dense LDL in metabolic syndrome is discussed. PMID:17265899

  3. Coalescence preference in densely packed microbubbles

    PubMed Central

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook

    2015-01-01

    A bubble merged from two parent bubbles with different size tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in a relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important to understand the reality of coalescence dynamics in a variety of packing situations of soft matter. PMID:25583640

  4. Dense Gas-Star Systems: Evolution of Supermassive Stars

    NASA Astrophysics Data System (ADS)

    Amaro-Seoane, P.; Spurzem, R.

    In the 1960s and 70s, super-massive central objects (from now onwards SMOs) were thought to be the main source of active galactic nuclei (AGN) characteristics (luminosities of L ≅ 10^12 L_sun). The release of gravitational binding energy by the accretion of material on to an SMO in the range of 10^7 - 10^9 M_sun has been suggested to be the primary powerhouse (Lynden-Bell 1969). That rather exotic early idea has become common sense nowadays. Not only does our own galaxy harbour a few-million-solar-mass black hole (Genzel 2001), but many other non-active galaxies also show kinematic and gas-dynamic evidence of these objects (Magorrian et al. 1998). The concept of central super-massive stars (SMSs henceforth) (M ≥ 5 × 10^4 M_sun, where M is the mass of the SMS) embedded in dense stellar systems was suggested as a possible explanation for high-energy emission phenomena occurring in AGNs and quasars (Vilkoviski 1976, Hara 1978), such as X-ray emissions (Bahcall and Ostriker 1975). SMSs and super-massive black holes (SMBHs) are two possibilities to explain the nature of SMOs, and SMSs may be an intermediate step towards the formation of SMBHs (Rees 1984). In this paper we give the equations that describe the dynamics of such a dense star-gas system, which are the basis for the code that will be used in the near future to simulate this scenario. We also briefly outline the mathematical fundamentals of the code.

  5. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.
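
    The pattern-plus-transformation idea can be illustrated in miniature. The sketch below is not HERCULES itself (which combines code patterns, transformation scripts, and compiler plugins); it is a plain Python stand-in where the "pattern" is x ** 2 and the rewrite produces x * x, leaving the rest of the source untouched:

        # Minimal pattern-driven rewrite with Python's ast module.
        import ast

        class SquareToMultiply(ast.NodeTransformer):
            def visit_BinOp(self, node):
                self.generic_visit(node)             # rewrite inner expressions first
                if (isinstance(node.op, ast.Pow)
                        and isinstance(node.right, ast.Constant)
                        and node.right.value == 2):
                    return ast.BinOp(node.left, ast.Mult(), node.left)
                return node

        tree = SquareToMultiply().visit(ast.parse("y = (a + b) ** 2"))
        print(ast.unparse(ast.fix_missing_locations(tree)))   # y = (a + b) * (a + b)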

  6. Dense packing: surgical indications and technical considerations.

    PubMed

    Farjo, Bessam; Farjo, Nilofer

    2013-08-01

    Dense packing is the philosophy of fitting more than 30 to 35 follicular unit grafts per square centimeter in one operation. The aim is to produce a more even, consistent, and natural looking flow of hair after just one procedure. Although desirable in principle, not all patients are suitable candidates nor is it possible to achieve in certain patients (eg, coarse or curly hair). Patients who have sufficient donor availability, reasonably stable hair loss, and high hair-to-skin color ratios are the ideal candidates. The authors highlight their philosophies and strategies for dense packing. PMID:24017984

  7. The Galactic Dense Gas Distribution and Properties

    NASA Astrophysics Data System (ADS)

    Glenn, Jason

    2015-08-01

    As the nearest spiral galaxy, the Milky Way provides a foundation for understanding galactic astrophysics. However, our position within the Galactic plane makes it challenging to decipher the detailed disk structure. The Galactic distribution of dense gas is relatively poorly known; thus, it is difficult to assess models of galaxy evolution by comparison to the Milky Way. Furthermore, fundamental aspects of star formation remain unknown, such as why the stellar and star cluster initial mass functions appear to be ubiquitous.Sub/millimeter dust continuum surveys, coupled with molecular gas surveys, are revealing the 3D distribution and properties of dense, star-forming gas throughout the disk. Here we report on the use of BGPS and Hi-GAL. BGPS is a 1.1 mm survey of the 1st Galactic quadrant and some lines of sight in the 2nd quadrant, totalling 200 deg^2. We developed a technique using the Galactic rotation curve to derive distance probability density functions (DPDFs) to molecular cloud structures identified with continuum surveys. DPDFs combine vLSR measures from dense gas tracers and 13CO with distance discriminators, such as 8 μm extinction, HI self absorption, and (l, b, vLSR) associations with objects of known distances. Typical uncertainties are σdist ≤ 1 kpc for 1,710 BGPS objects with well-constrained distances.From DPDFs we derived the dense gas distribution and the dense gas mass function. We find evidence for dense gas in and between putative spiral arms. A log-normal distribution describes the mass function, which ranges from cores to clouds, but is primarily comprised of clumps. High-mass power laws do not fit the entire data set well, although power-law behavior emerges for sources nearer than 6.5 kpc (α = 2.0±0.1) and for objects between 2 kpc and 10 kpc (α = 1.9±0.1). The power law indices are generally between those of GMC and the stellar IMF. We have begun to apply this approach to the Hi-GAL (70 - 500 μm). With coverage of the entire
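
    At its core the DPDF construction is Bayesian multiplication on a distance grid. The toy sketch below uses illustrative numbers only (not the BGPS pipeline): a bimodal kinematic likelihood encoding the near/far ambiguity is multiplied by an independent prior, standing in for an 8 μm extinction discriminator that favors the near side.

        # Toy distance probability density function (DPDF).
        import numpy as np

        d = np.linspace(0.1, 16.0, 1000)                 # heliocentric distance (kpc)
        def gauss(mu, sig):
            return np.exp(-0.5 * ((d - mu) / sig) ** 2)

        kinematic = gauss(3.0, 0.5) + gauss(11.0, 0.5)   # near/far kinematic solutions
        prior = 1.0 / (1.0 + np.exp(d - 7.0))            # extinction-style near-side prior
        dpdf = kinematic * prior
        dpdf /= dpdf.sum() * (d[1] - d[0])               # normalize to unit area

        print(d[np.argmax(dpdf)])                        # ~3 kpc: ambiguity resolved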

  8. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.
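
    The optimizer/driver pattern described above is, at bottom, constrained numerical optimization. As a loose analogy only (not TETRA or its modules), the sketch below lets a generic SLSQP solver iterate two made-up design variables to minimize a toy cost subject to a performance constraint:

        # Generic constrained optimization standing in for a systems-code driver.
        from scipy.optimize import minimize

        cost = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2                 # toy capital cost
        perf = {"type": "ineq", "fun": lambda x: x[0] * x[1] - 2.0}  # performance floor

        res = minimize(cost, x0=[2.0, 2.0], method="SLSQP",
                       constraints=[perf], bounds=[(0.1, 10.0)] * 2)
        print(res.x, res.fun)   # design variables at the constrained optimum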

  9. Coding for spread-spectrum communications networks

    NASA Astrophysics Data System (ADS)

    Kim, Bal G.

    1987-03-01

    The multiple-access capability of a frequency-hop packet radio network is investigated from a coding point of view. The achievable region of code rate and channel traffic and the normalized throughput are considered as performance measures. We model the communication system from the modulator input to the demodulator output as an I-user interference channel, and evaluate the asymptotic performance of various coding schemes for channels with perfect side information, no side information, and imperfect side information. The coding schemes considered are Reed-Solomon codes, concatenated codes, and parallel decoding schemes. We derive the optimal code rate and the optimal channel traffic at which the normalized throughput is maximized, and from these optimum values the asymptotic maximum normalized throughput is derived. The results are then compared with channel capacities.
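
    The rate-optimization idea can be reproduced under simple textbook assumptions (independent symbol hits, bounded-distance Reed-Solomon decoding; not the exact channel model of the work above): throughput is the code rate times the probability that the number of symbol errors stays within the correction radius, and sweeping the rate finds the optimum for a given traffic level.

        # Throughput-optimal Reed-Solomon rate under a toy hit-probability model.
        from math import comb

        def throughput(k, n, p_hit):
            t = (n - k) // 2                       # bounded-distance correction radius
            p_ok = sum(comb(n, e) * p_hit**e * (1 - p_hit)**(n - e)
                       for e in range(t + 1))
            return (k / n) * p_ok

        n, p_hit = 32, 0.10                        # hypothetical traffic-induced hit rate
        best = max(range(2, n, 2), key=lambda k: throughput(k, n, p_hit))
        print(best / n, throughput(best, n, p_hit))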

  10. ION BEAM HEATED TARGET SIMULATIONS FOR WARM DENSE MATTER PHYSICS AND INERTIAL FUSION ENERGY

    SciTech Connect

    Barnard, J.J.; Armijo, J.; Bailey, D.S.; Friedman, A.; Bieniosek, F.M.; Henestroza, E.; Kaganovich, I.; Leung, P.T.; Logan, B.G.; Marinak, M.M.; More, R.M.; Ng, S.F.; Penn, G.E.; Perkins, L.J.; Veitzer, S.; Wurtele, J.S.; Yu, S.S.; Zylstra, A.B.

    2008-08-01

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  11. Ion Beam Heated Target Simulations for Warm Dense Matter Physics and Inertial Fusion Energy

    SciTech Connect

    Barnard, J J; Armijo, J; Bailey, D S; Friedman, A; Bieniosek, F M; Henestroza, E; Kaganovich, I; Leung, P T; Logan, B G; Marinak, M M; More, R M; Ng, S F; Penn, G E; Perkins, L J; Veitzer, S; Wurtele, J S; Yu, S S; Zylstra, A B

    2008-08-12

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  12. Ion beam heated target simulations for warm dense matter physics and inertial fusion energy

    NASA Astrophysics Data System (ADS)

    Barnard, J. J.; Armijo, J.; Bailey, D. S.; Friedman, A.; Bieniosek, F. M.; Henestroza, E.; Kaganovich, I.; Leung, P. T.; Logan, B. G.; Marinak, M. M.; More, R. M.; Ng, S. F.; Penn, G. E.; Perkins, L. J.; Veitzer, S.; Wurtele, J. S.; Yu, S. S.; Zylstra, A. B.

    2009-07-01

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy-related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single-pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam-target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  13. a Novel Removal Method for Dense Stripes in Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Shen, Huanfeng; Yuan, Qiangqiang; Zhang, Liangpei; Cheng, Qing

    2016-06-01

    In remote sensing images, stripe noise commonly occurs and severely degrades image quality, limiting subsequent applications, especially when the stripes are dense. To process densely striped data robustly and ensure a reliable solution, we construct a statistical-property-based constraint in our proposed model and use it to control the whole destriping process. The alternating direction method of multipliers (ADMM) is applied in this work to solve and accelerate the model optimization. Experimental results on real data with different kinds of dense stripe noise demonstrate the effectiveness of the proposed method from both qualitative and quantitative perspectives.
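    The paper's destriping model itself is not reproduced here, but the ADMM machinery it invokes can be sketched on the kind of 1D total-variation subproblem such formulations typically reduce to. The stripe-extraction usage at the end (a column-mean profile minus its TV-smoothed version) is a generic illustration under that assumption, not the proposed method.

```python
import numpy as np

def tv_admm(y, lam=1.0, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||x - y||^2 + lam*||Dx||_1 (1D total variation)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    A = np.eye(n) + rho * D.T @ D         # x-update system matrix
    x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold
        u += D @ x - z
    return x

# Toy usage: the residual between a noisy column-mean profile and its
# TV-smoothed version approximates a column-wise stripe pattern.
profile = np.repeat([0.0, 3.0, 1.0], 50) + 0.3 * np.random.randn(150)
stripe_estimate = profile - tv_admm(profile, lam=2.0)
```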

  14. Coalescence preference in dense packing of bubbles

    NASA Astrophysics Data System (ADS)

    Kim, Yeseul; Gim, Bopil; Weon, Byung Mook

    2015-11-01

    Coalescence preference is the tendency of a bubble merged from the contact of two parent bubbles to sit nearer to the larger parent. Here, we show that this preference can be blocked by dense packing of neighboring bubbles. We use high-speed, high-resolution X-ray microscopy to clearly visualize individual coalescence events, which occur on microsecond time scales, inside dense packings of microbubbles with a local packing fraction of ~40%. Previous theory and experimental evidence predict a power-law exponent of -5 relating the relative coalescence position to the parent size ratio. However, our new observations of coalescence preference in densely packed microbubbles show a different exponent of -2. We believe this result may be important for understanding coalescence dynamics in dense packings of soft matter. This work (NRF-2013R1A22A04008115) was supported by the Mid-career Researcher Program through an NRF grant funded by the MEST, by the Ministry of Science, ICT and Future Planning (2009-0082580), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A6A3A04039257).
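    The exponent comparison above is, at bottom, a log-log slope measurement; a minimal sketch of how such a power is extracted from (parent size ratio, relative coalescence position) pairs follows, with entirely hypothetical numbers standing in for the X-ray measurements.

```python
import numpy as np

# Hypothetical data: parent bubble size ratio vs. relative coalescence position
ratio = np.array([1.2, 1.5, 2.0, 3.0, 4.0])
pos = np.array([0.70, 0.45, 0.25, 0.11, 0.06])

# Fit pos ~ ratio**alpha by linear regression in log-log space
alpha, _ = np.polyfit(np.log(ratio), np.log(pos), 1)
print(f"fitted power ~ {alpha:.1f}")  # ~ -2 in dense packing, -5 when unconfined
```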

  15. The Southern California Dense GPS Geodetic Array

    NASA Technical Reports Server (NTRS)

    Webb, F.

    1994-01-01

    The Southern California Earthquake Center is coordinating an effort by scientists at the Jet Propulsion Laboratory, the U.S. Geological Survey, and various academic institutions to establish a dense, 250-station, continuously recording GPS geodetic array in southern California for measuring crustal deformation associated with slip on the numerous faults that underlie the major metropolitan areas of southern California.

  16. Preparation of a dense, polycrystalline ceramic structure

    DOEpatents

    Cooley, Jason; Chen, Ching-Fong; Alexander, David

    2010-12-07

    Ceramic nanopowder was sealed inside a metal container under a vacuum. The sealed evacuated container was forced through a severe deformation channel at an elevated temperature below the melting point of the ceramic nanopowder. The result was a dense nanocrystalline ceramic structure inside the metal container.

  17. DENSE NONAQUEOUS PHASE LIQUIDS -- A WORKSHOP SUMMARY

    EPA Science Inventory

    site characterization, and, therefore, DNAPL remediation, can be expected. Dense nonaqueous phase liquids (DNAPLs) in the subsurface are long-term sources of ground-water contamination, and may persist for centuries before dissolving completely in adjacent ground water. In respo...

  18. Efficiently dense hierarchical graphene based aerogel electrode for supercapacitors

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Lu, Chengxing; Peng, Huifen; Zhang, Xin; Wang, Zhenkun; Wang, Gongkai

    2016-08-01

    Boosting gravimetric and volumetric capacitances simultaneously at high rate remains a challenge in the development of graphene-based supercapacitors. We report the preparation of dense hierarchical graphene/activated carbon composite aerogels via a reduction-induced self-assembly process coupled with a drying post-treatment. The compact and porous structures of the composite aerogels could be maintained. The drying post-treatment significantly increases the packing density of the aerogels. The introduced activated carbons play the key roles of spacers and bridges, mitigating the restacking of adjacent graphene nanosheets and connecting lateral and vertical graphene nanosheets, respectively. The optimized aerogel, with a packing density of 0.67 g cm-3, could deliver maximum gravimetric and volumetric capacitances of 128.2 F g-1 and 85.9 F cm-3, respectively, at a current density of 1 A g-1 in aqueous electrolyte, showing no apparent degradation of the specific capacitance at a current density of 10 A g-1 after 20,000 cycles. Corresponding gravimetric and volumetric capacitances of 116.6 F g-1 and 78.1 F cm-3, with acceptable cyclic stability, are also achieved in ionic liquid electrolyte. The results demonstrate a feasible strategy for designing dense hierarchical graphene-based aerogels for supercapacitors.

  19. Texture-Aware Dense Image Matching Using Ternary Census Transform

    NASA Astrophysics Data System (ADS)

    Hu, Han; Chen, Chongtai; Wu, Bo; Yang, Xiaoxia; Zhu, Qing; Ding, Yulin

    2016-06-01

    Textureless areas and geometric discontinuities are major problems in state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost methods, but in textureless areas, where intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes and must compromise between smoothness and discontinuities. The aim of this study is to overcome these issues in dense image matching by extending the industry-proven Semi-Global Matching through 1) a ternary census transform, which takes three outputs in a single-order comparison and encodes the results in two bits rather than one, and 2) texture information used to self-tune the parameters, which both preserves sharp edges and enforces smoothness when necessary. Experimental results using various datasets from different platforms have shown that the visual quality of the triangulated point clouds in urban areas can be largely improved by the proposed methods.
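    A minimal sketch of the ternary census idea follows: each neighbor comparison yields one of three outcomes (darker, similar within a threshold, brighter) encoded in two bits instead of the binary transform's one. The window size, threshold, and wrap-around border handling via np.roll are illustrative choices, not the authors' implementation; the matching cost would then be a Hamming-style distance over the 2-bit symbols.

```python
import numpy as np

def ternary_census(img, eps=2, win=2):
    """Ternary census transform over a (2*win+1)^2 window, 2 bits per
    neighbor; with win=2 the 24 comparisons fit in a uint64 code."""
    codes = np.zeros(img.shape, dtype=np.uint64)
    center = img.astype(int)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1).astype(int)
            diff = neighbor - center
            # 0 = darker, 1 = similar within eps, 2 = brighter
            sym = np.where(diff > eps, 2, np.where(diff < -eps, 0, 1))
            codes = (codes << np.uint64(2)) | sym.astype(np.uint64)
    return codes
```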

  20. Understanding neutron production in the deuterium dense plasma focus

    SciTech Connect

    Appelbe, Brian; Chittenden, Jeremy

    2014-12-15

    The deuterium Dense Plasma Focus (DPF) can produce copious MeV neutrons and can be used as an efficient neutron source. However, the mechanism by which neutrons are produced within the DPF is poorly understood, and this limits our ability to optimize the device. In this paper we present results from a computational study aimed at understanding how neutron production occurs in DPFs with currents between 70 kA and 500 kA and which parameters can affect it. A combination of MHD and kinetic tools is used to model the different stages of the DPF implosion. It is shown that the anode shape can significantly affect the structure of the imploding plasma and that instabilities in the implosion lead to the generation of large electric fields at stagnation. These electric fields can accelerate deuterium ions within the stagnating plasma to large (>100 keV) energies, leading to reactions with ions in the cold dense plasma. It is shown that the electromagnetic fields present can significantly affect the trajectories of the accelerated ions and the resulting neutron production.

  1. Revisiting Intrinsic Curves for Efficient Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sohn, G.; Théau, J.; Ménard, P.

    2016-06-01

    Dense stereo matching is one of the fundamental and active areas of photogrammetry. The increasing image resolution of digital cameras as well as the growing interest in unconventional imaging, e.g. unmanned aerial imagery, has exposed stereo image pairs to serious occlusion, noise and matching ambiguity. This has also resulted in an increase in the range of disparity values that should be considered for matching. Therefore, conventional methods of dense matching need to be revised to achieve higher levels of efficiency and accuracy. In this paper, we present an algorithm that uses the concepts of intrinsic curves to propose sparse disparity hypotheses for each pixel. Then, the hypotheses are propagated to adjoining pixels by label-set enlargement based on the proximity in the space of intrinsic curves. The same concepts are applied to model occlusions explicitly via a regularization term in the energy function. Finally, a global optimization stage is performed using belief-propagation to assign one of the disparity hypotheses to each pixel. By searching only through a small fraction of the whole disparity search space and handling occlusions and ambiguities, the proposed framework could achieve high levels of accuracy and efficiency.

  2. Efficiently dense hierarchical graphene based aerogel electrode for supercapacitors

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Lu, Chengxing; Peng, Huifen; Zhang, Xin; Wang, Zhenkun; Wang, Gongkai

    2016-08-01

    Boosting gravimetric and volumetric capacitances simultaneously at high rate remains a challenge in the development of graphene-based supercapacitors. We report the preparation of dense hierarchical graphene/activated carbon composite aerogels via a reduction-induced self-assembly process coupled with a drying post-treatment. The compact and porous structures of the composite aerogels could be maintained. The drying post-treatment significantly increases the packing density of the aerogels. The introduced activated carbons play the key roles of spacers and bridges, mitigating the restacking of adjacent graphene nanosheets and connecting lateral and vertical graphene nanosheets, respectively. The optimized aerogel, with a packing density of 0.67 g cm-3, could deliver maximum gravimetric and volumetric capacitances of 128.2 F g-1 and 85.9 F cm-3, respectively, at a current density of 1 A g-1 in aqueous electrolyte, showing no apparent degradation of the specific capacitance at a current density of 10 A g-1 after 20,000 cycles. Corresponding gravimetric and volumetric capacitances of 116.6 F g-1 and 78.1 F cm-3, with acceptable cyclic stability, are also achieved in ionic liquid electrolyte. The results demonstrate a feasible strategy for designing dense hierarchical graphene-based aerogels for supercapacitors.

  3. A novel double patterning approach for 30nm dense holes

    NASA Astrophysics Data System (ADS)

    Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven

    2011-04-01

    Double Patterning Technology (DPT) was commonly accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and experimented with to address the manufacturing and technical challenges. A novel process of double-patterned pillars combined with image reversal is proposed for the realization of low-cost dense holes in 30nm-node DRAM. The nature of pillar formation lithography provides much better optical contrast compared to the counterpart hole patterning with similar CD requirements. By the utilization of a reliable freezing process, double-patterned pillars can be readily implemented. A novel image reversal process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of different image reversal processes are investigated and discussed in terms of their pros and cons. An economic approach with optimized lithography performance is proposed for application to the 30nm DRAM node.

  4. The Distribution of YSO Masses in Dense Hubs and Less Dense Filaments

    NASA Astrophysics Data System (ADS)

    Kirk, Helen; Myers, P.

    2010-01-01

    Dense "hubs" and less dense radiating "filaments" are common features of nearby star-forming regions and infrared dark clouds. Cores and young stars are more concentrated in such hubs than in their radiating filaments. Accreting protostars may gain less mass in such low-density filaments, since low-density gas takes longer to accrete, and since the accretion must draw gas from a greater distance in filamentary geometry. We present an investigation of the mass distributions of YSOs in dense clusters and low-density filaments in the nearest molecular clouds, to test whether YSO masses depend on environment density and geometry. HK is supported by an NSERC PDF.

  5. Monte Carlo simulations of ionization potential depression in dense plasmas

    NASA Astrophysics Data System (ADS)

    Stransky, M.

    2016-01-01

    A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate the modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated to be independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers, as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.

  6. Fully kinetic simulations of megajoule-scale dense plasma focus

    SciTech Connect

    Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M.; Welch, D.; Meehan, B. T.; Hagen, E. C.

    2014-10-15

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  7. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length increases in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  8. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length increases in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
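    Of the layers in this concatenated scheme, the innermost (n, n-16) CRC is the easiest to sketch. The bitwise implementation below assumes the common CCITT parameters (polynomial 0x1021, all-ones preset), since the abstract does not spell out the CCSDS generator; it is an illustration, not the simulator's code.

```python
def crc16(data: bytes, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 of the (n, n-16) kind: 16 parity bits appended to
    the information bits. Parameters are assumed CCITT values."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

print(hex(crc16(b"123456789")))  # 0x29b1 for these parameters
```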

  9. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
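    The connection is compact enough to sketch: the Gallager and van Voorhis result picks, for each geometric ratio, the optimal Golomb parameter m, and restricting m to powers of two yields exactly the Rice subcodes. The encoder below is a standard Golomb coder (unary quotient plus truncated-binary remainder), written here for illustration rather than taken from the paper.

```python
def golomb_encode(n, m):
    """Golomb code of non-negative integer n with parameter m; a power-of-two
    m reduces to a Rice subcode (plain b-bit remainder)."""
    q, r = divmod(n, m)
    unary = "1" * q + "0"          # quotient in unary
    b = m.bit_length() - 1         # floor(log2 m)
    if (1 << b) == m:              # Rice case
        return unary + format(r, f"0{b}b")
    t = (1 << (b + 1)) - m         # truncated-binary threshold
    if r < t:
        return unary + format(r, f"0{b}b")
    return unary + format(r + t, f"0{b + 1}b")

print([golomb_encode(n, 4) for n in range(6)])
# ['000', '001', '010', '011', '1000', '1001']
```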

  10. An Empirical Evaluation of Coding Methods for Multi-Symbol Alphabets.

    ERIC Educational Resources Information Center

    Moffat, Alistair; And Others

    1994-01-01

    Evaluates the performance of different methods of data compression coding in several situations. Huffman's code, arithmetic coding, fixed codes, fast approximations to arithmetic coding, and splay coding are discussed in terms of their speed, memory requirements, and proximity to optimal performance. Recommendations for the best methods of…

  11. The kinetic chemistry of dense interstellar clouds

    NASA Technical Reports Server (NTRS)

    Graedel, T. E.; Langer, W. D.; Frerking, M. A.

    1982-01-01

    A model of the time-dependent chemistry of dense interstellar clouds is formulated to study the dominant chemical processes in carbon and oxygen isotope fractionation, the formation of nitrogen-containing molecules, and the evolution of product molecules as a function of cloud density and temperature. The abundances of the dominant isotopes of the carbon- and oxygen-bearing molecules are calculated. The chemical abundances are found to be quite sensitive to electron concentration since the electron concentration determines the ratio of H3(+) to He(+), and the electron density is strongly influenced by the metals abundance. For typical metal abundances and for H2 cloud density not less than 10,000 molecules/cu cm, nearly all carbon exists as CO at late cloud ages. At high cloud density, many aspects of the chemistry are strongly time dependent. Finally, model calculations agree well with abundances deduced from observations of molecular line emission in cold dense clouds.

  12. Hydrodynamic stellar interactions in dense star clusters

    NASA Technical Reports Server (NTRS)

    Rasio, Frederic A.

    1993-01-01

    Highly detailed HST observations of globular-cluster cores and galactic nuclei motivate new theoretical studies of the violent dynamical processes which govern the evolution of these very dense stellar systems. These processes include close stellar encounters and direct physical collisions between stars. Such hydrodynamic stellar interactions are thought to explain the large populations of blue stragglers, millisecond pulsars, X-ray binaries, and other peculiar sources observed in globular clusters. Three-dimensional hydrodynamics techniques now make it possible to perform realistic numerical simulations of these interactions. The results, when combined with those of N-body simulations of stellar dynamics, should provide for the first time a realistic description of dense star clusters. Here I review briefly current theoretical work on hydrodynamic stellar interactions, emphasizing its relevance to recent observations.

  13. Dense hadronic matter in holographic QCD

    NASA Astrophysics Data System (ADS)

    Kim, Keun-Young; Sin, Sang-Jin; Zahed, Ismail

    2013-10-01

    We provide a method to study hadronic matter at finite density in the context of the Sakai-Sugimoto model. We introduce the baryon chemical potential through the external U(1)_V gauge field in the induced (DBI plus CS) action on the D8 probe brane, where baryons are skyrmions. Vector dominance is manifest at finite density. We derive the effect of the baryon density on the energy density and on the dispersion relations of pions and vector mesons at large N_c. The energy density asymptotes to a constant at large density, suggesting that dense matter at large N_c freezes, with the pion velocity dropping to zero. Holographic dense matter enforces exactly the tenets of vector dominance and efficiently screens vector mesons. At the freezing point, the ρ-ππ coupling vanishes, with a finite ρ mass of about 20% of its vacuum value.

  14. Active fluidization in dense glassy systems.

    PubMed

    Mandal, Rituparno; Bhuyan, Pranab Jyoti; Rao, Madan; Dasgupta, Chandan

    2016-07-20

    Dense soft glasses show strong collective caging behavior at sufficiently low temperatures. Using molecular dynamics simulations of a model glass former, we show that the incorporation of activity or self-propulsion, f0, can induce cage breaking and fluidization, resulting in the disappearance of the glassy phase beyond a critical f0. The diffusion coefficient crosses over from being strongly to weakly temperature dependent as f0 is increased. In addition, we demonstrate that activity induces a crossover from a fragile to a strong glass and a tendency of active particles to cluster. Our results are of direct relevance to the collective dynamics of dense active colloidal glasses and to recent experiments on tagged particle diffusion in living cells. PMID:27380935

  15. Dense matter theory: A simple classical approach

    NASA Astrophysics Data System (ADS)

    Savić, P.; Čelebonović, V.

    1994-07-01

    In the sixties, the first author, P. Savić, together with R. Kašanin, started developing a mean-field theory of dense matter. It is based on the Coulomb interaction, supplemented by a microscopic selection rule and a set of experimentally founded postulates. Applications of the theory range from the calculation of models of planetary internal structure to DAC experiments.

  16. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1998-01-01

    Preparation, structure, and properties of mixed metal oxide compositions and their uses are described. Mixed metal oxide compositions of the invention have stratified crystalline structure identifiable by means of powder X-ray diffraction patterns. In the form of dense ceramic membranes, the present compositions demonstrate an ability to separate oxygen selectively from a gaseous mixture containing oxygen and one or more other volatile components by means of ionic conductivities.

  17. Shear dispersion in dense granular flows

    DOE PAGES

    Christov, Ivan C.; Stone, Howard A.

    2014-04-18

    We formulate and solve a model problem of dispersion of dense granular materials in rapid shear flow down an incline. The effective dispersivity of the depth-averaged concentration of the dispersing powder is shown to vary as the Péclet number squared, as in classical Taylor–Aris dispersion of molecular solutes. An extension to generic shear profiles is presented, and possible applications to industrial and geological granular flows are noted.
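    For reference, the classical molecular-solute result that this granular scaling parallels can be written out; the 1/48 coefficient below is specific to pressure-driven flow in a circular tube of radius a (it changes with geometry), so it is an analogue of, not identical to, the incline-flow coefficient.

```latex
% Taylor–Aris dispersion of a molecular solute (tube of radius a,
% mean speed U, molecular diffusivity D):
D_{\mathrm{eff}} = D \left( 1 + \frac{\mathrm{Pe}^{2}}{48} \right),
\qquad \mathrm{Pe} = \frac{U a}{D},
% so for Pe >> 1 the effective dispersivity grows as Pe^2, the scaling
% reported above for dense granular flow down an incline.
```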

  18. Dense Molecular Gas in Centaurus A

    NASA Astrophysics Data System (ADS)

    Wild, Wolfgang; Eckart, Andreas

    1999-10-01

    Centaurus A (NGC 5128) is the closest radio galaxy, and its molecular interstellar medium has been studied extensively in recent years. However, these studies mostly used molecular lines tracing low- to medium-density gas (see e.g. Eckart et al. 1990; Wild et al. 1997). The amount and distribution of the dense component remained largely unknown. We present spectra of the HCN(1-0) emission - which traces dense (n(H2) > 10^4 cm^-3) molecular gas - at the center and along the prominent dust lane at offset positions +/- 60" and +/- 100", as well as single CS(2-1) and CS(3-2) spectra, observed with the SEST on La Silla, Chile. At the central position, the integrated intensity ratio I(HCN)/I(CO) peaks at 0.064, and it decreases to approximately 0.02 to 0.04 in the dust lane. Based on the line luminosity ratio L(HCN)/L(CO), we estimate that there is a significant amount of dense gas in Centaurus A. The fraction of dense molecular gas as well as the star formation efficiency LFIR/LCO towards the center of Cen A is comparable to ultra-luminous infrared galaxies, and falls between the values for ULIRGs and normal galaxies at positions in the dust lane. Details will be published in Wild & Eckart (A&A, in prep.). Eckart et al. 1990, ApJ 363, 451; Rydbeck et al. 1993, Astr.Ap. (Letters) 270, L13; Wild, W., Eckart, A. & Wiklind, T. 1997, Astr.Ap. 322, 419.

  19. Structures for dense, crack free thin films

    DOEpatents

    Jacobson, Craig P.; Visco, Steven J.; De Jonghe, Lutgard C.

    2011-03-08

    The process described herein provides a simple and cost effective method for making crack free, high density thin ceramic film. The steps involve depositing a layer of a ceramic material on a porous or dense substrate. The deposited layer is compacted and then the resultant laminate is sintered to achieve a higher density than would have been possible without the pre-firing compaction step.

  20. Rapid Optimization Library

    SciTech Connect

    Denis Rldzal, Drew Kouri

    2014-05-13

    ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL uses only evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.

  1. Rapid Optimization Library

    Energy Science and Technology Software Center (ESTSC)

    2014-05-13

    ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL uses only evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
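    The "matrix-free" claim has a concrete meaning that a short sketch makes explicit: the optimizer touches the client simulation only through a scalar-valued objective and a vector-valued gradient callable, never through an assembled matrix. The loop below is a generic gradient-descent stand-in for illustration, not ROL's actual C++ interface.

```python
import numpy as np

def gradient_descent(f, grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Matrix-free optimization loop: only f(x) and grad(x) evaluations."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= lr * g
    return x

# Toy client "simulation": quadratic response with a known gradient.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
print(gradient_descent(f, grad, [3.0, -4.0]))  # converges toward the origin
```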

  2. Multishock Compression Properties of Warm Dense Argon.

    PubMed

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm(3) from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced, from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density after the ith shock. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states under different experimental conditions: ηi' increases with pressure in the lower-density regime and decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increase the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the present multishock compression states of argon lie in the warm dense matter regime. PMID:26515505

  3. Dense spray evaporation as a mixing process

    NASA Astrophysics Data System (ADS)

    de Rivas, A.; Villermaux, E.

    2016-05-01

    We explore the processes by which a dense set of small liquid droplets (a spray) evaporates in a dry, stirred gas phase. A dense spray of micron-sized liquid (water or ethanol) droplets is formed in air by a pneumatic atomizer in a closed chamber. The spray is conveyed in ambient air as a plume whose extension depends on the relative humidity of the diluting medium. Standard shear instabilities develop at the plume edge, forming the stretched lamellar structures familiar from passive scalars. Unlike passive scalars, however, these lamellae vanish in a finite time, because individual droplets evaporate at their border in contact with the dry environment. Experiments demonstrate that the lifetime of an individual droplet embedded in a lamella is much larger than expected from the usual d^2 law describing the fate of a single drop evaporating in a quiescent environment. By analogy with the way mixing times are understood from the convection-diffusion equation for passive scalars, we show that the lifetime of a spray lamella stretched at a constant rate γ is t_v = (1/γ) ln((1 + ϕ)/ϕ), where ϕ is a parameter that incorporates the thermodynamic and diffusional properties of the vapor in the diluting phase. The case of time-dependent stretching rates is examined too. A dense spray behaves almost as a (nonconserved) passive scalar.

  4. Numerical modeling for dilute and dense sprays

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Kim, Y. M.; Shang, H. M.; Ziebarth, J. P.; Wang, T. S.

    1992-01-01

    We have successfully implemented a numerical model for spray-combustion calculations. In this model, the governing gas-phase equations in Eulerian coordinates are solved by a time-marching multiple pressure correction procedure based on the operator-splitting technique. The droplet-phase equations in Lagrangian coordinates are solved by a stochastic discrete particle technique. In order to simplify the calculation procedure for the circulating droplets, the effective conductivity model is utilized. The k-epsilon models are utilized to characterize the time and length scales of the gas phase, in conjunction with turbulent modulation by droplets and droplet dispersion by turbulence. This method entails random sampling of instantaneous gas flow properties, and the stochastic process requires a large number of computational parcels to produce satisfactory dispersion distributions even for rather dilute sprays. Two major improvements in spray combustion modeling were made. First, we developed a probability density function approach in multidimensional space to represent a specific computational particle. Second, we incorporated the Taylor Analogy Breakup (TAB) model for handling dense spray effects. This breakup model is based on the reasonable assumption that atomization and drop breakup are indistinguishable processes within a dense spray near the nozzle exit. Accordingly, atomization is prescribed by injecting drops with a characteristic size equal to the nozzle exit diameter. Example problems include nearly homogeneous and inhomogeneous turbulent particle dispersion, and non-evaporating, evaporating, and burning dense sprays. Comparison with experimental data is discussed in detail.

  5. Hybrid-Based Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still cause problems and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way of providing proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are approved by the edge-drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter P_e is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting both values derived from the SGM cost aggregation and the U-SURF matching, providing more reliable estimates in disparity discontinuity areas. Evaluations on Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.

  6. Automated building extraction using dense elevation matrices

    NASA Astrophysics Data System (ADS)

    Bendett, A. A.; Rauhala, Urho A.; Pearson, James J.

    1997-02-01

    The identification and measurement of buildings in imagery is important to a number of applications including cartography, modeling and simulation, and weapon targeting. Extracting large numbers of buildings manually can be time-consuming and expensive, so the automation of the process is highly desirable. This paper describes and demonstrates such an automated process for extracting rectilinear buildings from stereo imagery. The first step is the generation of a dense elevation matrix registered to the imagery. In the examples shown, this was accomplished using global minimum residual matching (GMRM). GMRM automatically removes y-parallax from the stereo imagery and produces a dense matrix of x-parallax values which are proportional to the local elevation, and, of course, registered to the imagery. The second step is to form a joint probability distribution of the image gray levels and the corresponding height values from the elevation matrix. Based on the peaks of that distribution, the area of interest is segmented into feature and non-feature areas. The feature areas are further refined using length, width and height constraints to yield promising building hypotheses with their corresponding vertices. The gray shade image is used in the third step to verify the hypotheses and to determine precise edge locations corresponding to the approximate vertices and satisfying appropriate orthogonality constraints. Examples of successful application of this process to imagery are presented, and extensions involving the use of dense elevation matrices from other sources are possible.

  7. Dense Correspondences across Scenes and Scales.

    PubMed

    Tau, Moria; Hassner, Tal

    2016-05-01

    We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only a few image pixels, matching only pixels for which stable scales may be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts (the pixels around them) which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated at sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale invariant descriptors to be extracted anywhere in the image. (ii) We explore three means for propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii) we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows accurate dense correspondences to be obtained even between very different images, with little computational cost beyond that required by existing methods. PMID:26336115

  8. MACRAD: A mass analysis code for radiators

    SciTech Connect

    Gallup, D.R.

    1988-01-01

    A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length-to-diameter ratio, vapor-to-wick radius ratio, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.

  9. Codes with special correlation.

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.

    1964-01-01

    Uniform binary codes with special correlation, including transorthogonality and simplex codes, Hadamard matrices, and difference sets.

  10. Applications of Coding in Network Communications

    ERIC Educational Resources Information Center

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  11. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  12. Modelling the spinning dust emission from dense interstellar clouds

    NASA Astrophysics Data System (ADS)

    Ysard, N.; Juvela, M.; Verstraete, L.

    2011-11-01

    Context. Electric dipole emission arising from rapidly rotating polycyclic aromatic hydrocarbons (PAHs) is often invoked to explain the anomalous microwave emission. This assignation is based on i) an observed tight correlation between the mid-IR emission of PAHs and the anomalous microwave emission; and ii) a good agreement between models of spinning dust and the broadband anomalous microwave emission spectrum. So far often detected at large scale in the diffuse interstellar medium, the anomalous microwave emission has recently been studied in detail in well-known dense molecular clouds with the help of Planck data. Aims: While much attention has been given to the physics of spinning dust emission, the impact of varying local physical conditions has not yet been considered in detail. Our aim is to study the emerging spinning dust emission from interstellar clouds with realistic physical conditions and radiative transfer. Methods: We use the DustEM code to describe the extinction and IR emission of all dust populations. The spinning dust emission is obtained with SpDust, which we have coupled to DustEM. We carry out full radiative transfer simulations and carefully estimate the local gas state as a function of position within interstellar clouds. Results: We show that the spinning dust emission is sensitive to the abundances of the major ions (H ii, C ii) and we propose a simple scheme to estimate these abundances. We also investigate the effect of changing the cosmic-ray rate. In dense media, where radiative transfer is mandatory to estimate the temperature of the grains, we show that the relationship between the spinning and mid-IR emissivities of PAHs is no longer linear and that the spinning dust emission may actually be strong at the centre of clouds where the mid-IR PAH emission is weak. These results provide new ways to trace grain growth from diffuse to dense medium and will be useful for the analysis of anomalous microwave emission at the scale of

  13. Three body dynamics in dense gravitational systems

    NASA Astrophysics Data System (ADS)

    Moody, Kenneth

    galactic black hole binaries as a background source. I also found that the binaries are ejected from the cluster with, for the most part, a velocity just above the escape speed of the cluster, which is a few tens of km/sec. These gravitational wave sources are thus confined to their host galaxies, as the galactic escape velocity is some hundreds of km/sec, which only a very few binaries achieve in special cases. I studied the effect of the Kozai mechanism on two pulsars, one in the globular cluster M4, and the other J1903+0327. The M4 pulsar was found to have an unusually large orbital eccentricity, given that it is in a binary with a period of nearly 200 days. This unusual behavior led to the conclusion that a planet-like third body of much less than a solar mass was orbiting the binary. I used my own code to integrate the secular evolution equations with a broad set of initial conditions to determine the first detailed properties of the third body; namely, that the mass of the planet is about that of Jupiter. The second pulsar, J1903+0327, consists of a 2.15 ms pulsar and a near-solar-mass companion in an e = 0.44 orbit. A preliminary study of this pulsar showed that the high eccentricity can be reproduced by my models, and there are three candidate clusters from which this pulsar could have originated. My third project was a study of the effect of a planet at 50 AU on the inner solar system. The origin of this planet is assumed to be an exchange with another solar system in the early stages of the sun's life, while it was still in the dense star-forming region where it was born. Similar studies have been done with the exchange of stars among binaries by Malmberg et al. (2007b). The exchange once again allows the Kozai effect to bring about drastic change in the inner system. A planet is chosen as the outer object as, unlike a stellar companion, it would remain unseen by current radial velocity and direct observation methods, although it could be detected by

  14. Two Perspectives on the Origin of the Standard Genetic Code

    NASA Astrophysics Data System (ADS)

    Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa

    2014-12-01

    The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.

  15. DISPERSION OF DENSE GAS RELEASES IN A WIND TUNNEL

    EPA Science Inventory

    The paper documents two dense gas projects undertaken at the US EPA Fluid Modeling Facility. The study investigated the basic nature of the transport and dispersion of a dense gas plume in a simulated neutral atmospheric boundary layer. The two dense gas releases were CO2 and SF6...

  16. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose deploying a depth-first search segmentation algorithm that traverses a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects, we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.
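    The segmentation step (identifying connected components by depth-first search over a graph view of the mesh) is simple enough to sketch directly; the adjacency-dict representation and the iterative stack below are clarity-driven choices, not the paper's data structures.

```python
def connected_components(adjacency):
    """DFS over a mesh graph given as {vertex: [neighbor, ...]},
    returning one vertex list per connected component."""
    seen, components = set(), []
    for start in adjacency:
        if start in seen:
            continue
        component, stack = [], [start]
        seen.add(start)
        while stack:                    # iterative DFS avoids recursion limits
            v = stack.pop()
            component.append(v)
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        components.append(component)
    return components

mesh = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}  # two components
print(connected_components(mesh))  # [[0, 1, 2], [3, 4]]
```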

  17. Backward Raman compression of x-rays in metals and warm dense matters

    SciTech Connect

    Son, S.; Ku, S.; Moon, Sung Joon

    2010-11-15

    The experimentally observed decay rate of the long-wavelength Langmuir wave in metals and dense plasmas is orders of magnitude larger than the prediction of the prevalent Landau damping theory. The discrepancy is explored, and the existence of a regime where the forward Raman scattering is stable and the backward Raman scattering is unstable is examined. The amplification of an x-ray pulse in this regime, via backward Raman compression, is computationally demonstrated, and the optimal pulse duration and intensity are estimated.

  18. Quantum molecular dynamics simulations of transport properties in liquid and dense-plasma plutonium

    SciTech Connect

    Kress, J. D.; Cohen, James S.; Kilcrease, D. P.; Horner, D. A.; Collins, L. A.

    2011-02-15

    We have calculated the viscosity and self-diffusion coefficients of plutonium in the liquid phase using quantum molecular dynamics (QMD) and in the dense-plasma phase using orbital-free molecular dynamics (OFMD), as well as in the intermediate warm dense matter regime with both methods. Our liquid-metal results for viscosity are about 40% lower than measured experimentally, whereas a previous calculation using an empirical interatomic potential (modified embedded-atom method) obtained results 3-4 times larger than the experiment. The QMD and OFMD results agree well at the intermediate temperatures. The calculations in the dense-plasma regime, for temperatures from 50 to 5000 eV and densities about 1-5 times ambient, are compared with the one-component plasma (OCP) model, using effective charges given by the average-atom code INFERNO. The INFERNO-OCP model results agree with the OFMD to within about a factor of 2, except for the viscosity at temperatures less than about 100 eV, where the disagreement is greater. A Stokes-Einstein relationship between the viscosities and diffusion coefficients is found to hold fairly well, separately, in both the liquid and dense-plasma regimes.
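    The Stokes-Einstein check mentioned above ties the two transport coefficients together through a single effective radius; a generic form is recorded below, with C a boundary-condition constant (6 for stick, 4 for slip) and a an effective ionic radius. The specific constants adopted in the paper are not stated here.

```latex
% Stokes–Einstein relation linking shear viscosity and self-diffusion:
\eta \, D = \frac{k_{B} T}{C \pi a},
\qquad C = 6 \ (\text{stick}), \quad C = 4 \ (\text{slip}).
```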

  19. Impact-activated solidification of dense suspensions

    NASA Astrophysics Data System (ADS)

    Waitukaitis, Scott

    2013-03-01

    Shear-thickening, non-Newtonian fluids have typically been investigated under steady-state conditions. This approach has produced two pictures for suspension response to imposed forcing. In the weak shear-thickening picture, the response is typically attributed to hydrodynamic interactions giving rise to hydroclusters, small groups of particles interacting through lubrication forces. At the other end of the spectrum, in the discontinuous shear-thickening regime, the response can be seen as a system-wide jamming that is ultimately limited in strength by the system boundaries. While these steady-state pictures have proven extremely useful, some of the most interesting phenomena associated with dense suspensions are transient and local in character. A prototypical example is the extraordinarily large impact resistance of dense suspensions such as cornstarch and water. When poked lightly these materials respond like a fluid, but when punched or kicked they seem to temporarily "solidify" and provide enormous resistance to the motion of the impacting object. Using an array of experimental techniques, including high-speed video, embedded force and acceleration sensing, and x-ray imaging, we are able to investigate the dynamic details of this process as it unfolds. We find that an impacting object drives the rapid growth of a jammed, solid-like region directly below the impact site. Being coupled to the surrounding fluid by grain-mediated lubrication forces, this creates substantial peripheral flow and ultimately leads to the sudden extraction of the impactor's momentum. With a simple jamming picture to describe the solidification and an added-mass model to explain the force on the rod, we are able to predict the forces on the impactor quantitatively. These findings highlight the importance of the non-equilibrium character of dense suspensions near jamming and might serve as a bridge between the weak and discontinuous shear-thickening pictures.

  20. Grain Growth and Silicates in Dense Clouds

    NASA Technical Reports Server (NTRS)

    Pendleton, Yvonne J.; Chiar, J. E.; Ennico, K.; Boogert, A.; Greene, T.; Knez, C.; Lada, C.; Roellig, T.; Tielens, A.; Werner, M.; Whittet, D.

    2006-01-01

    Interstellar silicates are likely to be a part of all grains responsible for visual extinction (Av) in the diffuse interstellar medium (ISM) and dense clouds. A correlation between Av and the depth of the 9.7 micron silicate feature (measured as optical depth, tau(9.7)) is expected if the dust species are well mixed. In the diffuse ISM, such a correlation is observed for lines of sight in the solar neighborhood. A previous study of the silicate absorption feature in the Taurus dark cloud showed a tendency for the correlation to break down at high Av (Whittet et al. 1988, MNRAS, 233, 321), but the scatter was large. We have acquired Spitzer Infrared Spectrograph data for several lines of sight in the IC 5146, Barnard 68, Chamaeleon I and Serpens dense clouds. Our data set spans an Av range between 2 and 35 magnitudes. All lines of sight show the 9.7 micron silicate feature. The Serpens data appear to follow the diffuse ISM correlation line, whereas the data for the other clouds show a non-linear correlation between the depth of the silicate feature and Av, much like the trend observed in the Taurus data. In fact, it appears that for visual extinctions greater than about 10 mag, tau(9.7) begins to level off. This decrease in the growth of the depth of the 9.7 micron feature with increasing Av could indicate the effects of grain growth in dense clouds. In this poster, we explore the possibility that grain growth causes an increase in opacity (Av) without causing a corresponding increase in tau(9.7).

  1. Multishock Compression Properties of Warm Dense Argon

    PubMed Central

    Zheng, Jun; Chen, Qifeng; Gu, Yunjun; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed with a multichannel optical pyrometer and a velocity interferometer system. Equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3, from the first- to fourth-shock compression, are presented. Single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. The experimental results indicate that the multiple-shock compression ratio (ηi = ρi/ρ0) is greatly enhanced, from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density after the ith shock. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states under the different experimental conditions: ηi' increases with pressure in the lower-density regime and, conversely, decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the multishock compression states of argon reached here lie in the warm dense regime. PMID:26515505
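
    The two compression ratios quoted above follow directly from the shock densities. A small sketch, assuming an illustrative initial density rho0 and placeholder shock densities spanning the quoted 1.9-5.3 g/cm3 range (not the measured data points):

    ```python
    rho0 = 0.60                 # assumed initial argon density, g/cm^3
    rho = [1.9, 3.4, 4.6, 5.3]  # placeholder first- to fourth-shock densities

    for i, r in enumerate(rho, start=1):
        eta = r / rho0                                 # cumulative ratio, rho_i/rho_0
        eta_rel = r / (rho[i - 2] if i > 1 else rho0)  # relative ratio, rho_i/rho_{i-1}
        print(f"shock {i}: eta = {eta:.2f}, eta' = {eta_rel:.2f}")
    ```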

  2. Dense Subgraph Partition of Positive Hypergraphs.

    PubMed

    Liu, Hairong; Latecki, Longin Jan; Yan, Shuicheng

    2015-03-01

    In this paper, we present a novel partition framework, called dense subgraph partition (DSP), to automatically, precisely and efficiently decompose a positive hypergraph into dense subgraphs. A positive hypergraph is a graph or hypergraph whose edges, except self-loops, have positive weights. We first define the concepts of core subgraph, conditional core subgraph, and disjoint partition of a conditional core subgraph, and then define DSP based on them. The result of DSP is an ordered list of dense subgraphs with decreasing densities, which uncovers all underlying clusters, as well as outliers. A divide-and-conquer algorithm, called min-partition evolution, is proposed to efficiently compute the partition. DSP has many appealing properties. First, it is a nonparametric partition that reveals all meaningful clusters in a bottom-up way. Second, it has an exact and efficient solution via the min-partition evolution algorithm, which is time-efficient, memory-friendly, and suitable for parallel processing. Third, it is a unified partition framework for a broad range of graphs and hypergraphs. We also establish its relationship with the densest k-subgraph problem (DkS), an NP-hard but fundamental problem in graph theory, and prove that DSP gives precise solutions to DkS for all k in a graph-dependent set, called the critical k-set. To the best of our knowledge, this strong result has not been reported before. Moreover, as our experimental results show, for sparse graphs, and especially web graphs, the size of the critical k-set is close to the number of vertices in the graph. We test the proposed partition framework on various tasks, and the experimental results clearly illustrate its advantages. PMID:26353260
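
    To make the density objective concrete, here is a sketch of the classical greedy "peeling" 2-approximation for the densest subgraph (Charikar 2000). This is a simpler, different algorithm than the paper's min-partition evolution, and the example graph is invented.

    ```python
    from collections import defaultdict

    def densest_subgraph(edges):
        """Greedy peeling: repeatedly remove the minimum-degree vertex and
        return the vertex set maximizing |E(S)|/|S| seen along the way."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        nodes = set(adj)
        m = sum(len(a) for a in adj.values()) // 2
        best, best_density = set(nodes), m / max(len(nodes), 1)
        while nodes:
            u = min(nodes, key=lambda x: len(adj[x]))
            m -= len(adj[u])
            for w in adj[u]:
                adj[w].discard(u)
            del adj[u]
            nodes.discard(u)
            if nodes and m / len(nodes) >= best_density:
                best, best_density = set(nodes), m / len(nodes)
        return best, best_density

    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # a triangle plus a pendant edge
    print(densest_subgraph(edges))            # ({0, 1, 2}, 1.0)
    ```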

  3. Dense optical-electrical interface module

    SciTech Connect

    Paul Chang

    2000-12-21

    The DOIM (Dense Optical-electrical Interface Module) is a custom-designed optical data transmission module employed in the upgrade of the Silicon Vertex Detector of the CDF experiment at Fermilab. Each DOIM module consists of a transmitter (TX) converting electrical differential input signals to optical outputs, a middle segment of jacketed fiber ribbon cable, and a receiver (RX) which senses the light inputs and converts them back to electrical signals. The targeted operational frequency is 53 MHz, and higher rates are achievable. This article outlines the design goals, implementation methods, production test results, and radiation hardness tests of these modules.

  4. Phase boundary of hot dense fluid hydrogen

    PubMed Central

    Ohta, Kenji; Ichimaru, Kota; Einaga, Mari; Kawaguchi, Sho; Shimizu, Katsuya; Matsuoka, Takahiro; Hirao, Naohisa; Ohishi, Yasuo

    2015-01-01

    We investigated the phase transformation of hot dense fluid hydrogen using static high-pressure laser-heating experiments in a laser-heated diamond anvil cell. The results show anomalies in the heating efficiency that are likely attributable to the phase transition from a diatomic to a monoatomic fluid hydrogen (the plasma phase transition) in the pressure range between 82 and 106 GPa. This study imposes tighter constraints on the location of the hydrogen plasma phase transition boundary and suggests a higher critical point than that predicted by theoretical calculations. PMID:26548442

  5. Electrical and thermal conductivities in dense plasmas

    SciTech Connect

    Faussurier, G.; Blancard, C.; Combis, P.; Videau, L.

    2014-09-15

    Expressions for the electrical and thermal conductivities in dense plasmas are derived by combining the Chester-Thellung-Kubo-Greenwood approach and the Kramers approximation. The infrared divergence is removed by assuming a Drude-like behaviour. An analytical expression is obtained for the Lorenz number that interpolates between the cold solid-state and hot plasma phases. An expression for the electrical resistivity is proposed using the Ziman-Evans formula, from which the thermal conductivity can be deduced using the analytical expression for the Lorenz number. The present method can be used to estimate the electrical and thermal conductivities of mixtures. Comparisons with experiment and with quantum molecular dynamics simulations are presented.
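
    Once a Lorenz number is in hand, the thermal conductivity follows from kappa = L*sigma*T. A sketch using the constant Sommerfeld value L0 rather than the paper's interpolating expression; the plasma values are hypothetical:

    ```python
    L0 = 2.44e-8  # Sommerfeld Lorenz number, W*Ohm/K^2

    def thermal_conductivity(sigma, T, L=L0):
        """kappa (W/m/K) from the electrical conductivity sigma (S/m) at T (K)."""
        return L * sigma * T

    print(f"kappa ~ {thermal_conductivity(sigma=1.0e6, T=1.0e4):.0f} W/m/K")  # ~244
    ```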

  6. Molecular dynamics simulations of dense plasmas

    SciTech Connect

    Collins, L.A.; Kress, J.D.; Kwon, I.; Lynch, D.L.; Troullier, N.

    1993-12-31

    We have performed quantum molecular dynamics simulations of hot, dense plasmas of hydrogen over a range of temperatures (0.1-5 eV) and densities (0.0625-5 g/cc). We determine the forces quantum mechanically from density functional, extended Huckel, and tight-binding techniques and move the nuclei according to the classical equations of motion. We determine pair-correlation functions, diffusion coefficients, and electrical conductivities. We find that many-body effects predominate in this regime. We begin to obtain agreement with the OCP and Thomas-Fermi models only at the higher temperatures and densities.
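
    A minimal sketch of the pair-correlation function g(r) such simulations report, assuming a cubic periodic box; the uniformly random positions here stand in for simulation output, so g(r) scatters around 1:

    ```python
    import numpy as np

    def pair_correlation(pos, box, nbins=50):
        """Histogram estimator of g(r) for positions pos (N, 3) in a cubic
        periodic box of side `box`, out to r = box/2."""
        n = len(pos)
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)  # minimum-image convention
        r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
        hist, edges = np.histogram(r, bins=nbins, range=(0.0, box / 2.0))
        rho = n / box ** 3
        shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
        return 0.5 * (edges[1:] + edges[:-1]), hist / (0.5 * n * rho * shell)

    pos = np.random.default_rng(0).uniform(0.0, 10.0, (500, 3))
    r, g = pair_correlation(pos, box=10.0)
    print(g[:5])  # ideal-gas-like input: values near 1
    ```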

  7. Gravity-driven dense granular flows

    SciTech Connect

    Ertas, Deniz; Grest, Gary S.; Halsey, Thomas C.; Devine, Dov; Silbert, Leonardo E.

    2000-03-29

    The authors report and analyze the results of numerical studies of dense granular flows in two and three dimensions, using both linear damped springs and Hertzian force laws between particles. Chute flow generically produces a constant density profile that satisfies scaling relations suggestive of a Bagnold grain-inertia regime. The type of force law has little impact on the behavior of the system. Failure is not initiated at the surface, consistent with the absence of surface flows and with the difference in principal stress directions at versus below the surface.

  8. On Coding Non-Contiguous Letter Combinations

    PubMed Central

    Dandurand, Frédéric; Grainger, Jonathan; Duñabeitia, Jon Andoni; Granier, Jean-Pierre

    2011-01-01

    Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity. PMID:21734901
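
    The information carried by a letter combination can be scored as -log2 of the fraction of lexicon words containing it. A toy sketch under assumed definitions (ordered, possibly non-contiguous "open bigrams"; an invented mini-lexicon), not the paper's corpus or exact measure:

    ```python
    import math

    LEXICON = ["table", "cable", "stable", "tale", "bleat", "batch"]

    def contains_open_bigram(word, a, b):
        """True if letter a occurs somewhere before letter b in word."""
        i = word.find(a)
        return i != -1 and b in word[i + 1:]

    def bigram_information(a, b, lexicon=LEXICON):
        n = sum(contains_open_bigram(w, a, b) for w in lexicon)
        return float("inf") if n == 0 else -math.log2(n / len(lexicon))

    print(bigram_information("t", "l"))  # non-contiguous "t..l": 1.0 bit
    print(bigram_information("b", "l"))  # "b..l" is less constraining: ~0.58 bit
    ```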

  9. Efficient Online Aggregates in Dense-Region-Based Data Cube Representations

    NASA Astrophysics Data System (ADS)

    Haddadin, Kais; Lauer, Tobias

    In-memory OLAP systems require a space-efficient representation of sparse data cubes in order to accommodate large data sets. On the other hand, most efficient online aggregation techniques, such as prefix sums, are built on dense array-based representations. These are often not applicable to real-world data due to the size of the arrays which usually cannot be compressed well, as most sparsity is removed during pre-processing. A possible solution is to identify dense regions in a sparse cube and only represent those using arrays, while storing sparse data separately, e.g. in a spatial index structure. Previous dense-region-based approaches have concentrated mainly on the effectiveness of the dense-region detection (i.e. on the space-efficiency of the result). However, especially in higher-dimensional cubes, data is usually more cluttered, resulting in a potentially large number of small dense regions, which negatively affects query performance on such a structure. In this paper, our focus is not only on space-efficiency but also on time-efficiency, both for the initial dense-region extraction and for queries carried out in the resulting hybrid data structure. We describe two methods to trade available memory for increased aggregate query performance. In addition, optimizations in our approach significantly reduce the time to build the initial data structure compared to former systems. Also, we present a straightforward adaptation of our approach to support multi-core or multi-processor architectures, which can further enhance query performance. Experiments with different real-world data sets show how various parameter settings can be used to adjust the efficiency and effectiveness of our algorithms.
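
    The dense-array machinery such systems build on is the classic prefix-sum (summed-area) table: any axis-aligned range aggregate costs O(1) lookups via inclusion-exclusion. A 2-D sketch with synthetic data:

    ```python
    import numpy as np

    cube = np.random.default_rng(1).integers(0, 10, size=(6, 8))
    P = cube.cumsum(axis=0).cumsum(axis=1)  # summed-area table

    def range_sum(P, r0, r1, c0, c1):
        """Sum of cube[r0:r1+1, c0:c1+1] using four table lookups."""
        total = P[r1, c1]
        if r0 > 0:
            total -= P[r0 - 1, c1]
        if c0 > 0:
            total -= P[r1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += P[r0 - 1, c0 - 1]
        return total

    assert range_sum(P, 1, 4, 2, 6) == cube[1:5, 2:7].sum()
    ```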

  10. Multichannel Coding of Applause Signals

    NASA Astrophysics Data System (ADS)

    Hotho, Gerard; van de Par, Steven; Breebaart, Jeroen

    2007-12-01

    We develop a parametric multichannel audio codec dedicated to coding signals consisting of a dense series of transient-type events. These signals, of which applause is a typical example, are known to be problematic for such audio codecs. The codec design is based on preservation of both timbre and transient-type event density. It combines very low complexity with a low parameter bit rate (0.2 kbps). In a formal listening test, we compared the proposed codec to the recently standardised MPEG Surround multichannel codec, with an associated parameter bit rate of 9 kbps. We found the new codec to have a significantly higher audio quality than the MPEG Surround codec for the two multichannel applause signals under test. Though this seems promising, the technique presented is not fully mature; for example, issues related to integration of the proposed codec into the MPEG Surround codec were not addressed.

  11. Nonlinear nanostructures in dense quantum plasmas

    SciTech Connect

    Shukla, P. K.; Eliasson, B.

    2009-10-08

    Dense quantum plasmas are ubiquitous in compact astrophysical objects (e.g. the interior of white dwarf stars, in magnetars, etc.), in semiconductors and micro-mechanical systems, as well as in the next generation intense laser-solid density plasma interaction experiments. In contrast to classical plasmas, one encounters extremely high plasma density and low temperature in dense quantum plasmas. In the latter, the electrons and positrons obey the Fermi-Dirac statistics, and there are new forces associated with i) quantum statistical electron and positron pressures, ii) electron and positron tunneling through the Bohm potential, and iii) electron and positron spin-1/2. Inclusion of these quantum forces gives rise to very high-frequency plasma waves (e.g. in the x-ray regime) at nanoscales. Our objective here is to present nonlinear equations that depict the localization of electron plasma waves in the form of a quantum electron hole and quantum vortex, as well as the trapping of intense electromagnetic waves into a quantum electron hole. Our simulation results reveal that these nonlinear nanostructures are quite robust. Hence, they can be explored for the purpose of transferring localized electrostatic and electromagnetic energies over nanoscales.

  12. Super-resolution without dense flow.

    PubMed

    Su, Heng; Wu, Ying; Zhou, Jie

    2012-04-01

    Super-resolution is a widely applied technique that improves the resolution of input images by software methods. Most conventional reconstruction-based super-resolution algorithms assume accurate dense optical flow fields between the input frames, and their performance degrades rapidly when the motion estimation result is not accurate enough. However, optical flow estimation is usually difficult, particularly when complicated motion is present in real-world videos. In this paper, we explore a new way to solve this problem by using sparse feature point correspondences between the input images. The feature point correspondences, which are obtained by matching a set of feature points, are usually precise and much more robust than dense optical flow fields. This is because the feature points represent well-selected significant locations in the image, and matching on the feature point set is usually very accurate. In order to utilize the sparse correspondences in conventional super-resolution, we extract an adaptive support region with a reliable local flow field from each corresponding feature point pair. A normalized prior is also proposed to increase the visual consistency of the reconstructed result. Extensive experiments on real data were carried out, and the results show that the proposed algorithm produces high-resolution images of better quality, particularly in the presence of large-scale or complicated motion fields. PMID:22027381

  13. Dense circumnuclear molecular gas in starburst galaxies

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Cunningham, M. R.; Green, J. A.; Dawson, J. R.; Jones, P. A.; López-Sánchez, Á. R.; Verdes-Montenegro, L.; Henkel, C.; Baan, W. A.; Martín, S.

    2016-04-01

    We present results from a study of the dense circumnuclear molecular gas of starburst galaxies. The study aims to investigate the interplay between starbursts, active galactic nuclei and molecular gas. We characterize the dense gas traced by HCN, HCO+ and HNC and examine its kinematics in the circumnuclear regions of nine starburst galaxies observed with the Australia Telescope Compact Array. We detect HCN (1-0) and HCO+ (1-0) in seven of the nine galaxies and HNC (1-0) in four. Approximately 7 arcsec resolution maps of the circumnuclear molecular gas are presented. The velocity-integrated intensity ratios, HCO+ (1-0)/HCN (1-0) and HNC (1-0)/HCN (1-0), are calculated. Using these integrated intensity ratios and spatial intensity ratio maps, we identify photon-dominated regions (PDRs) in NGC 1097, NGC 1365 and NGC 1808. We find no galaxy which shows the PDR signature in only one part of the observed nuclear region. We also observe unusually strong HNC emission in NGC 5236, but it is not strong enough to be consistent with X-ray-dominated region chemistry. Rotation curves are derived for five of the galaxies and dynamical mass estimates of the inner regions of three of the galaxies are made.

  14. Symmetry energy in cold dense matter

    NASA Astrophysics Data System (ADS)

    Jeong, Kie Sang; Lee, Su Houng

    2016-01-01

    We calculate the symmetry energy in cold dense matter both in the normal quark phase and in the 2-color superconductor (2SC) phase. For the normal phase, the thermodynamic potential is calculated by using hard dense loop (HDL) resummation to leading order, where the dominant contribution comes from the longitudinal gluon rest mass. The effect of the gluonic interaction on the symmetry energy, obtained from the thermodynamic potential, was found to be small. In the 2SC phase, the non-perturbative BCS pairing gives an enhanced symmetry energy, as the gapped states are forced to be in the common Fermi sea, reducing the number of available quarks that can contribute to the asymmetry. We used high-density effective field theory to estimate the contribution of the gluon interaction to the symmetry energy. Among the gluon rest masses in the 2SC phase, only the Meissner mass has isospin dependence, although its magnitude is much smaller than the Debye mass. As the isospin dependence of the gluon rest masses is even smaller than in the normal phase, we expect that the contribution of the gluonic interaction to the symmetry energy in the 2SC phase will be minimal. The different value of the symmetry energy in each phase will lead to different predictions for the particle yields in heavy ion collision experiments.

  15. Dynamics of Kr in dense clathrate hydrates

    NASA Astrophysics Data System (ADS)

    Klug, D. D.; Tse, J. S.; Zhao, J. Y.; Sturhahn, W.; Alp, E. E.; Tulk, C. A.

    2011-05-01

    The dynamics of Kr atoms as guests in dense clathrate hydrate structures are investigated using site-specific 83Kr nuclear resonant inelastic x-ray scattering (NRIXS) spectroscopy in combination with molecular dynamics simulations. The dense structure H hydrate and filled-ice structures are studied at high pressures in a diamond anvil high-pressure cell. The dynamics of Kr in the structure H clathrate hydrate quench-recovered at 77 K are also investigated. The Kr phonon density of states obtained from the experimental NRIXS data is compared with molecular dynamics simulations. The temperature and pressure dependence of the phonon spectra provides details of the Kr dynamics in the clathrate hydrate cages. A comparison with the dynamics of Kr atoms in the low-pressure structure II, obtained previously, was also made. The Lamb-Mössbauer factors obtained from NRIXS experiments and molecular dynamics calculations are in excellent agreement and are shown to yield unique information on the strength and temperature dependence of guest-host interactions.

  16. Nuclear quantum dynamics in dense hydrogen

    PubMed Central

    Kang, Dongdong; Sun, Huayang; Dai, Jiayu; Chen, Wenbo; Zhao, Zengxiu; Hou, Yong; Zeng, Jiaolong; Yuan, Jianmin

    2014-01-01

    Nuclear dynamics in dense hydrogen, which is determined by the key physics of large-angle scattering or many-body collisions between particles, is crucial for the dynamics of planetary evolution and for hydrodynamical processes in inertial confinement fusion. Here, using improved ab initio path-integral molecular dynamics simulations, we investigated the nuclear quantum dynamics of dense hydrogen, with regard to its transport behavior, up to temperatures of 1 eV. With the inclusion of nuclear quantum effects (NQEs), the ionic diffusion coefficients are higher than in the classical treatment by 20% to 146% as the temperature is decreased from 1 eV to 0.3 eV at 10 g/cm3; meanwhile, the electrical and thermal conductivities are significantly lowered. In particular, the ionic diffusion is found to be much larger than that without NQEs even when both ionic distributions are the same at 1 eV. The significant quantum delocalization of the ions introduces remarkably different scattering cross sections between protons compared with classical particle treatments, which explains the large difference in transport properties induced by NQEs. The Stokes-Einstein relation, the Wiedemann-Franz law, and isotope effects are re-examined and show different behavior in nuclear quantum dynamics. PMID:24968754

  17. Thomson scattering in warm dense matter

    NASA Astrophysics Data System (ADS)

    Thiele, R.; Bornath, T.; Fäustlin, R. R.; Fortmann, C.; Glenzer, S.; Gregori, G.; Holst, B.; Tschentscher, T.; Schwarz, V.; Redmer, R.

    2009-11-01

    Free electron lasers employing scattering of high-brilliance, coherent photons in the vacuum ultraviolet (VUV), e.g. at FLASH (DESY Hamburg) or LCLS (Stanford), allow for a systematic study of basic plasma properties in the region of warm dense matter (WDM). WDM is characterized by condensed-matter-like densities and temperatures of several eV. Collective Thomson scattering with VUV or x-ray light has demonstrated its capacity for robust measurements of the free electron density and temperature in WDM. Collective excitations like plasmons (the "electron feature") appear as maxima in the scattering signal. The respective frequencies can be related to the free electron density. Furthermore, the asymmetry of the red- and blue-shifted plasmon intensities gives the electron temperature via detailed balance. We treat collective Thomson scattering in the Born-Mermin approximation, which includes collisions, and present a generalized Gross-Bohm dispersion relation for dense plasmas. The influence of plasma inhomogeneities on the scattering spectrum is studied by comparing density- and temperature-averaged scattering signals with calculations assuming homogeneous targets. For the "ion feature," results of semi-classical hypernetted chain (HNC) calculations and of quantum molecular dynamics simulations are shown for dense beryllium.
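
    The detailed-balance step mentioned above has a compact form: the ratio of the down- and up-shifted plasmon intensities is exp(hbar*omega/(kB*Te)), so Te follows from one measured ratio. A sketch with invented peak intensities standing in for a measured spectrum:

    ```python
    import math

    HBAR = 1.054571817e-34  # J*s
    KB = 1.380649e-23       # J/K

    def electron_temperature(omega, I_red, I_blue):
        """Te (K) from the plasmon shift omega (rad/s) and the two peak
        intensities, using I_red/I_blue = exp(hbar*omega/(kB*Te))."""
        return HBAR * omega / (KB * math.log(I_red / I_blue))

    Te = electron_temperature(omega=7.0e15, I_red=1.0, I_blue=0.55)
    print(f"Te ~ {Te:.2e} K (~{Te * KB / 1.602e-19:.1f} eV)")
    ```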

  18. Solids flow rate measurement in dense slurries

    SciTech Connect

    Porges, K.G.; Doss, E.D.

    1993-09-01

    Accurate and rapid flow rate measurement of solids in dense slurries remains an unsolved technical problem, with important industrial applications in chemical processing plants and long-distance solids conveyance. In a hostile two-phase medium, such a measurement calls for two independent parameter determinations, both by non-intrusive means. Typically, dense slurries tend to flow in a laminar, non-Newtonian mode, eliminating most conventional means, which usually rely on calibration (which becomes more difficult and costly for high-pressure and high-temperature media). These issues are reviewed, and specific solutions are recommended in this report. Detailed calculations that lead to improved measuring device designs are presented for both bulk density and average velocity measurements. Cross-correlation, chosen here for the latter task, has long been too inaccurate for practical applications. The cause and the cure of this deficiency are discussed using theory-supported modeling. Fluid mechanics is used to develop the velocity profiles of laminar non-Newtonian flow in a rectangular duct. This geometry uniquely allows the design of highly accurate "capacitive" devices and also lends itself to gamma transmission densitometry on an absolute basis. An absolute readout, though of less accuracy, is also available from a capacitive densitometer, and a pair of capacitive sensors yields signals suitable for cross-correlation velocity measurement.
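
    Cross-correlation velocimetry of the kind discussed above reduces to finding the lag that maximizes the correlation between two sensor signals a known distance apart. A sketch with synthetic data (the separation, sample rate, and noise level are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 1000.0   # sample rate, Hz
    sep = 0.25    # sensor separation, m
    x = rng.normal(size=2000)                         # upstream sensor
    y = np.roll(x, 50) + 0.3 * rng.normal(size=2000)  # downstream: delayed + noise

    corr = np.correlate(y, x, mode="full")
    lag = corr.argmax() - (len(x) - 1)  # samples by which y trails x
    print(f"lag = {lag} samples, v = {sep / (lag / fs):.2f} m/s")  # ~5 m/s
    ```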

  19. Testing ergodicity in dense granular systems

    NASA Astrophysics Data System (ADS)

    Gao, Guo-Jie; Blawzdziewicz, Jerzy; O'Hern, Corey

    2008-03-01

    The Edwards entropy formalism provides a statistical mechanical framework for describing dense granular systems. Experiments on vibrated granular columns and numerical simulations of quasi-static shear flow of dense granular systems have provided indirect evidence that the Edwards theory may accurately describe certain aspects of these systems. However, a fundamental assumption of the Edwards description, that all mechanically stable (MS) granular packings at a given packing fraction and externally imposed stress are equally accessible, has not been explicitly tested. We investigate this assumption by generating all mechanically stable hard-disk packings in small bidisperse systems using a protocol in which we successively compress or decompress the system, followed by energy minimization. We then apply quasi-static shear flow at zero pressure to these MS packings and record the MS packings that occur during the shear flow. We generate a complete library of the allowed MS packings at each value of shear strain and determine the frequency with which each MS packing occurs. We find that the MS packings do not occur with equal probability at any value of shear strain. In fact, in small systems we find that the evolution becomes periodic, with a period that grows with system size. Our studies show that ergodicity can be improved by either adding random fluctuations to the system or increasing the system size.

  20. Continuum equations for dense shallow granular flows

    NASA Astrophysics Data System (ADS)

    Kumaran, Viswanathan

    2015-11-01

    Simplified equations are derived for a granular flow in the "dense" limit, where the volume fraction is close to that for dynamical arrest, and the "shallow" limit, where the stream-wise length for flow development (L) is large compared to the cross-stream height (h). In the dense limit, the equations are simplified by taking advantage of the power-law divergence of the pair distribution function, χ ∝ (ϕ_ad − ϕ)^(−α), where ϕ is the volume fraction and ϕ_ad is the volume fraction for arrested dynamics. When the height h is much larger than the conduction length, the energy equation reduces to an algebraic balance between the rates of production and dissipation of energy, and the stress is proportional to the square of the strain rate (Bagnold law). The analysis reveals important differences between granular flows and the flows of Newtonian fluids. One important difference is that the Reynolds number (the ratio of inertial and viscous terms) turns out to depend only on the layer height and the Bagnold coefficients, and is independent of the flow velocity, because both the inertial terms in the conservation equations and the divergence of the stress depend on the square of the velocity/velocity gradients.
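
    A sketch of the two scalings quoted above: the pair correlation diverging as chi = (phi_ad - phi)^(-alpha) and a Bagnold-type stress growing with the square of the strain rate. The prefactor, exponent, and arrest fraction are placeholders, not the paper's constants:

    ```python
    PHI_AD = 0.64  # assumed volume fraction at dynamical arrest
    ALPHA = 1.0    # assumed divergence exponent

    def chi(phi):
        return (PHI_AD - phi) ** (-ALPHA)

    def bagnold_stress(phi, gammadot, prefactor=1.0):
        """tau ~ chi(phi) * gammadot^2: quadratic in strain rate,
        diverging as phi approaches the arrest fraction."""
        return prefactor * chi(phi) * gammadot ** 2

    for phi in (0.55, 0.60, 0.63):
        print(f"phi = {phi}: tau = {bagnold_stress(phi, gammadot=10.0):.0f}")
    ```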

  1. Time Scales in Dense Granular Material

    NASA Astrophysics Data System (ADS)

    Zhang, Duan

    2005-07-01

    Forces in dense granular material are transmitted through particle contacts. The evolution of the contact stress is directly related to the dynamical interaction forces between particles. Since particle contacts in a dense granular material are random, a statistical method is employed to describe and model their motions. It is found that the time scales of particle contacts determine the stress relaxation and the fluid-like or solid-like behavior of the material. Numerical simulations are performed to calculate statistical properties of particle interactions. Using results from the numerical simulations, we examine the relationship between the averaged local deformation field and the macroscopic deformation field. We also examine the relationship between the averaged local interaction force and the averaged stress field in the material. The validity of the Voigt and Reuss assumptions is examined, and extensions to these assumptions are studied. Numerical simulations show that tangential friction between particles significantly increases the contact stress, while the direct contribution of the tangential force to the stress is small. This puzzling observation can be explained by the dependence of the relaxation time on the tangential friction.

  2. Probing the Physical Structures of Dense Filaments

    NASA Astrophysics Data System (ADS)

    Li, Di

    2015-08-01

    Filaments are a common feature of cosmological structures on various scales, ranging from the dark matter cosmic web, galaxy clusters, and inter-galactic gas flows to Galactic ISM clouds. Even within cold dense molecular cores, filaments have been detected. Theories and simulations with (or without) different combinations of physical ingredients, including gravity, thermal balance, turbulence, and magnetic fields, can reproduce intriguing images of filaments. The ubiquity of filaments and the similarity of simulated ones make physical parameters, beyond dust column density, a necessity for understanding filament evolution. I report three projects that measure physical parameters of filaments. We derive the volume density of a dense Taurus filament based on several cyanoacetylene transitions observed with the GBT and ART. We measure the gas temperature of the OMC 2-3 filament based on combined GBT+VLA ammonia images. We also measured the sub-millimeter polarization vectors along OMC3. These filaments were found to be likely cylinder-type structures, without dynamic heating, and likely accreting mass along the magnetic field lines.

  3. Quantum molecular dynamics simulations of dense matter

    SciTech Connect

    Collins, L.; Kress, J.; Troullier, N.; Lenosky, T.; Kwon, I.

    1997-12-31

    The authors have developed a quantum molecular dynamics (QMD) simulation method for investigating the properties of dense matter in a variety of environments. The technique treats a periodically replicated reference cell containing N atoms in which the nuclei move according to the classical equations of motion. The interatomic forces are generated from the quantum mechanical interactions between the electrons and nuclei. To generate these forces, the authors employ several methods of varying sophistication, from tight-binding (TB) to elaborate density functional (DF) schemes. In the latter case, lengthy simulations on the order of 200 atoms are routinely performed, while for TB, which requires no self-consistency, upwards of 1000 atoms are systematically treated. The QMD method has been applied to a variety of cases: (1) fluid/plasma hydrogen from liquid density to 20 times volume compression for temperatures of a thousand to a million degrees Kelvin; (2) isotopic hydrogenic mixtures; (3) liquid metals (Li, Na, K); (4) impurities such as argon in dense hydrogen plasmas; and (5) metal/insulator transitions in rare gas systems (Ar, Kr) under high compression. The advent of parallel versions of the methods, especially fast eigensolvers, presages LDA simulations in the range of 500-1000 atoms and TB runs for tens of thousands of particles. This leap should allow treatment of shock chemistry as well as large-scale mixtures of species in highly transient environments.
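
    Transport coefficients such as the diffusion coefficients mentioned above are commonly extracted from trajectories via the Einstein relation, MSD(t) -> 6*D*t. A sketch in which a random walk stands in for real QMD output (the timestep and step size are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt = 1.0e-15                                      # timestep, s
    steps = rng.normal(0.0, 1.0e-11, (5000, 100, 3))  # per-step displacements, m
    pos = steps.cumsum(axis=0)                        # unwrapped positions (t, atom, xyz)

    msd = ((pos - pos[0]) ** 2).sum(-1).mean(-1)      # average over atoms
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[len(t) // 2:], msd[len(t) // 2:], 1)[0]
    print(f"D ~ {slope / 6.0:.2e} m^2/s")
    ```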

  4. New binary quantum stabilizer codes from the binary extremal self-dual code

    NASA Astrophysics Data System (ADS)

    Wang, WeiLiang; Fan, YangYu; Li, RuiHu

    2015-08-01

    This paper is devoted to constructing binary quantum stabilizer codes based on the binary extremal self-dual code of parameters by Steane's construction. First, we provide an explicit generator matrix for the unique self-dual code to see it as a one-generator quasi-cyclic one and obtain six optimal self-orthogonal codes of parameters for with dual distances from 11 to 7 by puncturing the code. Second, a special type of subcode structures for self-orthogonal codes is investigated, and then ten derived dual chains are designed. Third, twelve binary quantum codes are constructed from these derived dual pairs within dual chains using Steane's construction. Ten of them, , , and , achieve as good parameters as the best known ones with comparable lengths and dimensions. Two other codes of parameters and are record breaking in the sense that they improve on the best known ones with the same lengths and dimensions in terms of distance.

  5. On a dense winding of the 2-dimensional torus

    NASA Astrophysics Data System (ADS)

    Kiselev, D. D.

    2016-04-01

    An important role in the solution of a class of optimal control problems is played by a certain polynomial of degree 2(n-1) of special form with integer coefficients. The linear independence of a family of k special roots of this polynomial over Q implies the existence of a solution of the original problem with optimal control in the form of a dense winding of a k-dimensional Clifford torus, which is traversed in finite time. In this paper, it is proved that for every integer n > 3 one can take k equal to 2. Bibliography: 6 titles.
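
    A line of irrational slope on the 2-torus is dense: sampled long enough, it comes arbitrarily close to any target point. A small numerical sketch (the slope and sampling step are arbitrary choices):

    ```python
    import math

    SLOPE = math.sqrt(2)  # any irrational slope gives a dense winding

    def winding(t):
        """Point (theta1, theta2) on the unit 2-torus at parameter t."""
        return (t % 1.0, (SLOPE * t) % 1.0)

    def circle_dist(a, b):
        d = abs(a - b) % 1.0
        return min(d, 1.0 - d)

    target = (0.5, 0.5)
    best = min((max(circle_dist(p, q) for p, q in zip(winding(0.01 * n), target)), n)
               for n in range(100000))
    print(f"closest approach {best[0]:.4f} at t = {0.01 * best[1]:.2f}")
    ```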

  6. Massive Star Formation: Characterising Infall and Outflow in dense cores.

    NASA Astrophysics Data System (ADS)

    Akhter, Shaila; Cunningham, Maria; Harvey-Smith, Lisa; Jones, Paul Andrew; Purcell, Cormac; Walsh, Andrew John

    2015-08-01

    Massive stars are some of the most important objects in the Universe, shaping the evolution of galaxies, creating chemical elements, and hence shaping the evolution of the Universe. However, the processes by which they form, and how they shape their environment during their birth, are not well understood. We use NH3 data from The H2O Southern Galactic Plane Survey (HOPS) to define the positions of dense cores/clumps of gas in the southern Galactic plane that are likely to form stars. Owing to its effective critical density, NH3 traces massive star-forming regions more effectively than many other tracers. We carried out a comparative study of different clump-finding methods and found FellWalker to perform best. We found ~10% of the star-forming clumps to have multiple velocity components and ~90% a single component along the line of sight. Using data from The Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey, we then search for the presence of infall and outflow associated with these cores. We will subsequently use the 3D Molecular Line Radiative Transfer Code (MOLLIE) to constrain properties of the infall and outflow, such as velocity and mass flow. The aim of the project is to determine how common infall and outflow are in star-forming cores, hence providing valuable constraints on the timescales and physical processes involved in massive star formation.

  7. ALEGRA-HEDP simulations of the dense plasma focus.

    SciTech Connect

    Flicker, Dawn G.; Kueny, Christopher S.; Rose, David V.

    2009-09-01

    We have carried out 2D simulations of three dense plasma focus (DPF) devices using the ALEGRA-HEDP code and validated the results against experiments. The three devices included two Mather-type machines described by Bernard et al. and the Tallboy device currently in operation at NSTec in North Las Vegas. We present simulation results and compare them to detailed plasma measurements for one Bernard device and to current and neutron yields for all three. We also describe a new ALEGRA capability to import data from particle-in-cell calculations of the initial gas breakdown, which will allow the first-ever simulations of DPF operation from the beginning of the voltage discharge to the pinch phase for arbitrary operating conditions and without assumptions about the early sheath structure. The next step in understanding DPF pinch physics must be three-dimensional modeling of the conditions going into the pinch, and we have just launched our first 3D simulation of the best-diagnosed Bernard device.

  8. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789

  9. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  10. Rheology of Dense Granular Mixtures and Slurries

    NASA Astrophysics Data System (ADS)

    Tewoldebrhan, Bereket Yohannes

    Dense granular flows, characterized by multiple contacts between grains, are common in many industrial processes and natural events, such as debris flows. Understanding the characteristics of these flows is crucial to predicting quantities such as bedrock erosion and the distance traveled by debris flows. However, the rheological properties of these flows are complicated by wide particle size distributions and the presence of interstitial fluids. Models for dense sheared granular materials indicate that their rheological properties depend on particle size, but the representative particle size for mixtures is not obvious. Using the discrete element method (DEM), we study sheared granular binary mixtures in a Couette cell to determine the relationship between rheological parameters, such as the stress and the effective coefficient of friction, and the particle size distribution. The results indicate that the stress does not depend monotonically on the average particle size, as it does in models derived from simple dimensional considerations. The stress has an additional dependence on a measure of the effective free volume per particle that is adapted from an expression for the packing of monosized particles near the jammed state. The effective friction also has a complicated dependence on the particle size distribution. For these systems of relatively hard particles, these relationships are governed largely by the ratio between average collision times and mean-free-path times. The characteristics of shallow free-surface flows, important for applications such as debris flows, are different from those of confined systems. To address this, we also study shallow granular flows in a rotating drum. The stress at the boundary, height profiles, and segregation patterns from DEM simulations are quantitatively similar to the results obtained from physical experiments of shallow granular flows in rotating drums. Individual particle-bed impacts rather than enduring contacts dominate the largest forces on the drum bed, which

  11. DENSE: efficient and prior knowledge-driven discovery of phenotype-associated protein functional modules

    PubMed Central

    2011-01-01

    proteins are likely associated with the target phenotype. The DENSE code can be downloaded from http://www.freescience.org/cs/DENSE/ PMID:22024446

  12. Implementation and Refinement of a Comprehensive Model for Dense Granular Flows

    SciTech Connect

    Sundaresan, Sankaran

    2015-09-30

    Dense granular flows are ubiquitous in both natural and industrial processes. They manifest three different flow regimes, each exhibiting its own dependence on solids volume fraction, shear rate, and particle-level properties. This research project sought to develop continuum rheological models for dense granular flows that bridge multiple regimes of flow, implement them in open-source platforms for gas-particle flows, and perform test simulations. The first phase of the research covered in this project involved implementation of a steady-shear rheological model that bridges the quasi-static, intermediate and inertial regimes of flow into MFIX (Multiphase Flow with Interphase eXchanges, a general-purpose computer code developed at the National Energy Technology Laboratory). MFIX simulations of dense granular flows in an hourglass-shaped hopper were then performed as test examples. The second phase focused on formulation of a modified kinetic theory for frictional particles that can be used over a wider range of particle volume fractions and also applies to dynamic, multi-dimensional flow conditions. To guide this work, simulations of simple shear flows of identical mono-disperse spheres were also performed using the discrete element method. The third phase of this project sought to develop and implement a more rigorous treatment of boundary effects. Towards this end, simulations of simple shear flows of identical mono-disperse spheres confined between parallel plates were performed and analyzed to formulate compact wall boundary conditions that can be used for dense frictional flows at flat frictional boundaries. The fourth phase explored the role of modest levels of cohesive interactions between particles on the dense-phase rheology. The final phase of this project focused on implementation and testing of the modified kinetic theory in MFIX and running bin-discharge simulations as test examples.

  13. Dense Array Effects in SWIR HgCdTe Photodetecting Arrays

    NASA Astrophysics Data System (ADS)

    Wichman, A. R.; Pinkie, B.; Bellotti, E.

    2015-09-01

    This paper presents results from three-dimensional quantitative modeling of dense, moderately doped [N_D (N_A) = 5 × 10^15 cm^-3] short-wave infrared (SWIR) p+n and n+p Hg1-xCdxTe double planar heterostructure photodetecting arrays with absorber x = 0.451 and cap x = 0.55. At uniform reverse bias, the competition for minority carriers between closely spaced diodes preserves densities below equilibrium levels throughout the absorber. This carrier suppression has several consequences in addition to suppressing dark current by constraining the minority-carrier gradients at each diode junction. First, the dense arrays maintain volume-average negative net radiative recombination rates (negative luminescence) roughly an order of magnitude larger than comparably biased isolated diodes. Second, the negative excess minority-carrier densities suppress the volume-average net Auger recombination rate by roughly an order of magnitude in dense n-type HgCdTe arrays compared with a single diode. Third, the long minority-electron diffusion lengths in the p-type HgCdTe absorber not only suppress lateral diffusion currents, but do so in a manner that provides negative differential resistance. By suppressing intrinsic recombination rates or lateral diffusion currents, each effect can contribute to increasing the R0A products in SWIR HgCdTe dense arrays. These effects should be considered when optimizing device structures for pitch, thickness, feature size, doping, and bias points.

  14. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  15. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  16. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1996-01-01

    Preparation, structure, and properties of mixed metal oxide compositions containing at least strontium, cobalt, iron and oxygen are described. The crystalline mixed metal oxide compositions of this invention have, for example, a structure represented by Sr_α(Fe_(1-x)Co_x)_(α+β)O_δ, where x is a number in a range from 0.01 to about 1, α is a number in a range from about 1 to about 4, β is a number in a range upward from 0 to about 20, and δ is a number which renders the compound charge neutral, and wherein the composition has a non-perovskite structure. Use of the mixed metal oxides in dense ceramic membranes which exhibit oxygen ionic conductivity and selective oxygen separation is described, as well as their use in the separation of oxygen from an oxygen-containing gaseous mixture.

  17. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1997-01-01

    Preparation, structure, and properties of mixed metal oxide compositions containing at least strontium, cobalt, iron and oxygen are described. The crystalline mixed metal oxide compositions of this invention have, for example, a structure represented by Sr_α(Fe_(1-x)Co_x)_(α+β)O_δ, where x is a number in a range from 0.01 to about 1, α is a number in a range from about 1 to about 4, β is a number in a range upward from 0 to about 20, and δ is a number which renders the compound charge neutral, and wherein the composition has a non-perovskite structure. Use of the mixed metal oxides in dense ceramic membranes which exhibit oxygen ionic conductivity and selective oxygen separation is described, as well as their use in the separation of oxygen from an oxygen-containing gaseous mixture.

  18. Ion beam driven warm dense matter experiments

    NASA Astrophysics Data System (ADS)

    Bieniosek, F. M.; Ni, P. A.; Leitner, M.; Roy, P. K.; More, R.; Barnard, J. J.; Kireeff Covo, M.; Molvik, A. W.; Yoneda, H.

    2007-11-01

    We report plans and experimental results for ion-beam-driven warm dense matter (WDM) experiments. Initial experiments at LBNL use a 0.3-1 MeV K+ beam (below the Bragg peak), increasing toward the Bragg peak in future versions of the accelerator. The WDM conditions are envisioned to be achieved by combined longitudinal and transverse neutralized drift compression, providing a hot spot on the target with a beam spot size of about 1 mm and a pulse length of about 1-2 ns. The range of the beams in solid-matter targets is about 1 micron, which can be lengthened by using porous targets at reduced density. Initial experiments include a study of transient darkening at LBNL and a porous-target experiment at GSI heated by intense heavy-ion beams from the SIS 18 storage ring. Further experiments will explore target temperature and other properties, such as electrical conductivity, to investigate phase transitions and the critical point.

  19. Granular flow model for dense planetary rings

    SciTech Connect

    Borderies, N.; Goldreich, P.; Tremaine, S.

    1985-09-01

    In the present study of the viscosity of a differentially rotating particle disk, in the limiting case where the particles are densely packed and their collective behavior resembles that of a liquid, the pressure tensor is derived from both the equations of hydrodynamics and a simple kinetic model of collisions due to Haff (1983). Density waves and narrow circular rings are unstable if the liquid approximation applies, and the consequent nonlinear perturbations may generate splashing of the ring material in the vertical direction. These results are pertinent to the origin of the ellipticities of ringlets, the nonaxisymmetric features near the outer edge of the Saturn B ring, and unexplained residuals in kinematic models of the Saturn and Uranus rings. 24 references.

  20. Constitutive relations for steady, dense granular flows

    NASA Astrophysics Data System (ADS)

    Vescovi, D.; Berzi, D.; di Prisco, C. G.

    2011-12-01

    In the recent past, the flow of dense granular materials has been the subject of many scientific works; this is due to the large number of natural phenomena involving solid particles flowing at high concentration (e.g., debris flows and landslides). In contrast with the flow of dilute granular media, where the energy is essentially dissipated in binary collisions, the flow of dense granular materials is characterized by multiple, long-lasting and frictional contacts among the particles. The work focuses on the mechanical response of dry granular materials under steady, simple shear conditions. In particular, the goal is to obtain a complete rheology able to describe the material behavior within the entire range of concentrations for which the flow can be considered dense. The total stress is assumed to be the linear sum of a frictional and a kinetic component. The frictional and kinetic contributions are modeled in the context of the critical state theory [8, 10] and the kinetic theory of dense granular gases [1, 3, 7], respectively. In the critical state theory, the granular material approaches a certain attractor state, independent of the initial arrangement, characterized by the capability of developing unlimited shear strains without any change in the concentration. Given that a disordered granular packing exists only for a range of concentrations between the random loose and close packings [11], a form for the concentration dependence of the frictional normal stress is defined that makes the latter vanish at the random loose packing. In the kinetic theory, the particles are assumed to interact through instantaneous, binary and uncorrelated collisions. A new state variable of the problem is introduced, the granular temperature, which accounts for the velocity fluctuations. The model has been extended to account for the decrease in energy dissipation due to the existence of correlated motion among the particles [5, 6] and to deal with non

  1. Nonplanar electrostatic shock waves in dense plasmas

    SciTech Connect

    Masood, W.; Rizvi, H.

    2010-02-15

    Two-dimensional quantum ion acoustic shock waves (QIASWs) are studied in an unmagnetized plasma consisting of electrons and ions. In this regard, a nonplanar quantum Kadomtsev-Petviashvili-Burgers (QKPB) equation is derived using the small-amplitude perturbation expansion method. Using the tangent hyperbolic method, an analytical solution of the planar QKPB equation is obtained and subsequently used as the initial profile to numerically solve the nonplanar QKPB equation. It is observed that increasing the number density (and correspondingly the quantum Bohm potential) and the kinematic viscosity affects the propagation characteristics of the QIASW. The temporal evolution of the nonplanar QIASW is investigated in both Cartesian and polar planes, and the results are discussed from the numerical standpoint. The results of the present study may be applicable to the propagation of small-amplitude localized electrostatic shock structures in dense astrophysical environments.

  2. Infrared and Submillimeter Studies of Dense Cores

    NASA Astrophysics Data System (ADS)

    Bourke, Tyler L.

    2014-07-01

    Dense cores are the birthplaces of stars, and so understanding their structure and evolution is key to understanding star formation. Information on the density, temperature, and motions within cores is needed to describe these properties and is obtained through continuum and line observations at far-infrared and submm/mm wavelengths. Recent observations of dust emission with Herschel and molecular line observations with single-dish telescopes and interferometers provide the wavelength coverage and resolution to finally map core properties without appealing to spherical simplifications. Although large-scale Herschel observations reveal numerous filaments in molecular clouds that are well described by cylindrical geometries, cores are still modeled as spherical entities. A few examples of other core geometries exist in the literature, and the wealth of new data on cloud filaments demands that non-spherical models receive more attention in future studies. This talk will examine the evidence for non-spherical cores and their connection to the filaments from which they form.

  3. Plasmon resonance in warm dense matter.

    PubMed

    Thiele, R; Bornath, T; Fortmann, C; Höll, A; Redmer, R; Reinholz, H; Röpke, G; Wierling, A; Glenzer, S H; Gregori, G

    2008-08-01

    Collective Thomson scattering with extreme ultraviolet light or x rays is shown to allow for a robust measurement of the free electron density in dense plasmas. Collective excitations like plasmons appear as maxima in the scattering signal. Their frequency position can be related directly to the free electron density. The range of applicability of the standard Gross-Bohm dispersion relation, and of an improved dispersion relation, in comparison to calculations based on the dielectric function in the random phase approximation is investigated. More importantly, this well-established treatment of Thomson scattering on free electrons is generalized in the Born-Mermin approximation by including collisions. We show that, in the transition region from collective to noncollective scattering, the consideration of collisions is important. PMID:18850950
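
    The density inference mentioned above amounts to inverting a Bohm-Gross-type dispersion, omega^2 = omega_pe^2 + 3*k^2*v_th^2, for the plasma frequency. A collisionless textbook sketch only (the Born-Mermin treatment in the paper goes further); the numbers are hypothetical:

    ```python
    E = 1.602176634e-19      # elementary charge, C
    ME = 9.1093837015e-31    # electron mass, kg
    EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
    KB = 1.380649e-23        # Boltzmann constant, J/K

    def electron_density(omega, k, Te):
        """n_e (m^-3) from the plasmon frequency omega (rad/s), scattering
        wavenumber k (1/m), and electron temperature Te (K)."""
        omega_pe2 = omega ** 2 - 3.0 * k ** 2 * KB * Te / ME
        return omega_pe2 * EPS0 * ME / E ** 2

    print(f"n_e ~ {electron_density(omega=8.0e15, k=1.5e9, Te=1.2e5):.2e} m^-3")
    ```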

  4. Plasmon resonance in warm dense matter

    NASA Astrophysics Data System (ADS)

    Thiele, R.; Bornath, T.; Fortmann, C.; Höll, A.; Redmer, R.; Reinholz, H.; Röpke, G.; Wierling, A.; Glenzer, S. H.; Gregori, G.

    2008-08-01

    Collective Thomson scattering with extreme ultraviolet light or x rays is shown to allow for a robust measurement of the free electron density in dense plasmas. Collective excitations like plasmons appear as maxima in the scattering signal. Their frequency position can be directly related to the free electron density. The range of applicability of the standard Gross-Bohm dispersion relation and of an improved dispersion relation is investigated in comparison to calculations based on the dielectric function in the random phase approximation. More importantly, this well-established treatment of Thomson scattering on free electrons is generalized in the Born-Mermin approximation by including collisions. We show that, in the transition region from collective to noncollective scattering, the consideration of collisions is important.

  5. Performance Evaluation of Dense Gas Dispersion Models.

    NASA Astrophysics Data System (ADS)

    Touma, Jawad S.; Cox, William M.; Thistle, Harold; Zapert, James G.

    1995-03-01

    This paper summarizes the results of a study to evaluate the performance of seven dense gas dispersion models using data from three field experiments. Two models (DEGADIS and SLAB) are in the public domain and the other five (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE) are proprietary. The field data used are the Desert Tortoise pressurized ammonia releases, the Burro liquefied natural gas spill tests, and the Goldfish anhydrous hydrofluoric acid spill experiments. The Desert Tortoise and Goldfish releases were simulated as horizontal jet releases, and Burro as a liquid pool. Performance statistics were used to compare maximum observed concentrations and plume half-widths to those predicted by each model. Model performance varied, and no model exhibited consistently good performance across all three databases. However, when combined across the three databases, all models performed within a factor of 2. Problems encountered are discussed in order to help future investigators.
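
    A common way to quantify the "factor of 2" statement above is the FAC2 statistic, the fraction of predictions within a factor of two of the observations. The sketch below computes it for hypothetical concentration pairs; the numbers are invented for illustration and are not the field data.

        import numpy as np

        def fac2(observed, predicted):
            """Fraction of predictions within a factor of two of observations,
            a standard performance statistic in dispersion-model evaluation."""
            obs = np.asarray(observed, dtype=float)
            pred = np.asarray(predicted, dtype=float)
            ratio = pred / obs
            return np.mean((ratio >= 0.5) & (ratio <= 2.0))

        # Hypothetical maximum-concentration pairs (ppm); not the field data.
        obs = [120.0, 45.0, 300.0, 18.0, 75.0]
        pred = [150.0, 30.0, 650.0, 20.0, 60.0]
        print(f"FAC2 = {fac2(obs, pred):.2f}")   # 0.80: 4 of 5 within factor 2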

  6. Statistical mechanics of dense granular media

    NASA Astrophysics Data System (ADS)

    Nicodemi, M.; Coniglio, A.; de Candia, A.; Fierro, A.; Ciamarra, M. Pica; Tarzia, M.

    2005-12-01

    We discuss some recent results on the statistical mechanics approach to dense granular media. In particular, by analytical mean field investigation we derive the phase diagram of monodisperse and bidisperse granular assemblies. We show that "jamming" corresponds to a phase transition from a "fluid" to a "glassy" phase, observed when crystallization is avoided. The nature of such a "glassy" phase turns out to be the same found in mean field models for glass formers. This gives quantitative evidence for the idea of a unified description of the "jamming" transition in granular media and thermal systems, such as glasses. We also discuss mixing/segregation transitions in binary mixtures and their connections to phase separation and "geometric" effects.

  7. Statistical mechanics of dense granular media

    NASA Astrophysics Data System (ADS)

    Coniglio, A.; Fierro, A.; Nicodemi, M.; Pica Ciamarra, M.; Tarzia, M.

    2005-06-01

    We discuss some recent results on the statistical mechanics approach to dense granular media. In particular, by analytical mean field investigation we derive the phase diagram of monodisperse and bidisperse granular assemblies. We show that 'jamming' corresponds to a phase transition from a 'fluid' to a 'glassy' phase, observed when crystallization is avoided. The nature of such a 'glassy' phase turns out to be the same as found in mean field models for glass formers. This gives quantitative evidence for the idea of a unified description of the 'jamming' transition in granular media and thermal systems, such as glasses. We also discuss mixing/segregation transitions in binary mixtures and their connections to phase separation and 'geometric' effects.

  8. Kaon condensation in dense stellar matter

    SciTech Connect

    Lee, Chang-Hwan; Rho, M.

    1995-03-01

    This article combines two talks given by the authors and is based on work done in collaboration with G.E. Brown and D.P. Min on kaon condensation in dense baryonic medium, treated in chiral perturbation theory using the heavy-baryon formalism. It contains, in addition to what was recently published, astrophysical backgrounds for kaon condensation discussed by Brown and Bethe, a discussion of a renormalization-group analysis of meson condensation worked out together with H.K. Lee and S.J. Sin, and recent results of K.M. Westerberg in the bound-state approach to the Skyrme model. Negatively charged kaons are predicted to condense at a critical density 2 ≲ ρ/ρ₀ ≲ 4, in the range that allows the intriguing new phenomena predicted by Brown and Bethe to take place in compact-star matter.

  9. Engineered circuit QED with dense resonant modes

    NASA Astrophysics Data System (ADS)

    Wilhelm, Frank; Egger, Daniel

    2013-03-01

    In circuit quantum electrodynamics, even in the ultrastrong coupling regime, strong quasi-resonant interaction typically involves only one mode of the resonator, as the mode spacing is comparable to the frequency of the mode. We present an engineered hybrid transmission line, consisting of a left-handed and a right-handed portion, that has a low-frequency van Hove singularity and hence shows a dense mode spectrum at an experimentally accessible point. This gives rise to strong multi-mode coupling and can be utilized in multiple ways to create strongly correlated microwave photons. Supported by DARPA through the QuEST program and by NSERC Discovery grants.

  10. Prediction of viscosity of dense fluid mixtures

    NASA Astrophysics Data System (ADS)

    Royal, Damian D.; Vesovic, Velisa; Trusler, J. P. Martin; Wakeham, William A.

    The Vesovic-Wakeham (VW) method of predicting the viscosity of dense fluid mixtures has been improved by implementing new mixing rules based on the rigid-sphere formalism. The proposed mixing rules are based on both Lebowitz's solution of the Percus-Yevick equation and on the Carnahan-Starling equation. The predictions of the modified VW method have been compared with experimental viscosity data for a number of diverse fluid mixtures: natural gas, hexane + heptane, hexane + octane, cyclopentane + toluene, and a ternary mixture of hydrofluorocarbons (R32 + R125 + R134a). The results indicate that the proposed improvements make possible the extension of the original VW method to liquid mixtures and to mixtures containing polar species, while retaining its original accuracy.
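
    One of the rigid-sphere building blocks named above, the Carnahan-Starling equation of state, is compact enough to state directly: the compressibility factor of the hard-sphere fluid is Z = (1 + eta + eta^2 - eta^3)/(1 - eta)^3 in the packing fraction eta. A minimal evaluation, for orientation only:

        def carnahan_starling_Z(eta):
            """Compressibility factor Z = PV/(NkT) of the hard-sphere fluid
            from the Carnahan-Starling equation of state, one of the
            rigid-sphere results the modified VW mixing rules build on."""
            return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

        # eta is the packing fraction: eta = (pi/6) * n * sigma^3 for number
        # density n and hard-sphere diameter sigma.
        for eta in (0.1, 0.2, 0.3, 0.4):
            print(f"eta = {eta:.1f}  ->  Z = {carnahan_starling_Z(eta):.3f}")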

  11. Coherent neutrino interactions in a dense medium

    NASA Astrophysics Data System (ADS)

    Kiers, Ken; Weiss, Nathan

    1997-11-01

    Motivated by the effect of matter on neutrino oscillations (the MSW effect) we study in more detail the propagation of neutrinos in a dense medium. The dispersion relation for massive neutrinos in a medium is known to have a minimum at nonzero momentum p ∼ G_F ρ/2. We study in detail the origin and consequences of this dispersion relation for both Dirac and Majorana neutrinos, both in a toy model with only neutral currents and a single neutrino flavor and in a realistic "standard model" with two neutrino flavors. We find that for a range of neutrino momenta near the minimum of the dispersion relation, Dirac neutrinos are trapped by their coherent interactions with the medium. This effect does not lead to the trapping of Majorana neutrinos.

  12. Supernovae in dense and dusty environments

    NASA Astrophysics Data System (ADS)

    Kankare, Erkki

    2013-02-01

    In this doctoral thesis supernovae in dense and dusty environments are studied, with an emphasis on core-collapse supernovae. The articles included in the thesis aim to increase our understanding of supernovae interacting with circumstellar material and of their place in stellar evolution. The results obtained are also important for deriving core-collapse supernova rates with reliable extinction corrections, which are directly related to star formation rates and galaxy evolution. In other words, supernovae are used as a tool in the research of both stellar and galaxy evolution, both of which can be considered fundamental to our understanding of the whole Universe. A detailed follow-up study of the narrow-line supernova 2009kn is presented in paper I, and its similarity to another controversial transient, supernova 1994W, is shown. These objects are clearly strongly interacting with relatively dense circumstellar matter; however, their physical origin is quite uncertain. In paper I different explosion models are discussed. Discoveries from a search programme for highly obscured supernovae in dusty luminous infrared galaxies are presented in papers II and III. The search was carried out using laser guide star adaptive optics monitoring at near-infrared wavelengths. By comparing multi-band photometric follow-up observations to template light curves, the likely types and the host galaxy extinctions for the four supernovae discovered were derived. The optical depth of normal spiral galaxy disks was studied statistically and reported in paper IV. This is complementary to studies such as the one presented in paper V, where the missing fractions of core-collapse supernovae were derived for both normal spiral galaxies and luminous infrared galaxies, to be used for correcting supernova rates both locally and as a function of redshift.

  13. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  14. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.
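
    To make the Viterbi point concrete, here is a minimal hard-decision Viterbi decoder for the classic 4-state, rate-1/2 convolutional code with octal generators (7, 5). This is a generic textbook sketch, not code from the paper; in the example it corrects a single flipped channel bit.

        # Rate-1/2 convolutional code, generators 111 and 101 (binary),
        # state = last two input bits (m1, m2); metric = Hamming distance.

        def encode(bits):
            m1 = m2 = 0
            out = []
            for u in bits:
                out += [u ^ m1 ^ m2, u ^ m2]
                m1, m2 = u, m1
            return out

        def viterbi(received):
            metrics = {(0, 0): 0}          # state -> best metric so far
            paths = {(0, 0): []}           # state -> best input sequence
            for i in range(0, len(received), 2):
                r0, r1 = received[i], received[i + 1]
                new_metrics, new_paths = {}, {}
                for (m1, m2), metric in metrics.items():
                    for u in (0, 1):
                        d = metric + ((u ^ m1 ^ m2) != r0) + ((u ^ m2) != r1)
                        state = (u, m1)
                        if state not in new_metrics or d < new_metrics[state]:
                            new_metrics[state] = d
                            new_paths[state] = paths[(m1, m2)] + [u]
                metrics, paths = new_metrics, new_paths
            return paths[min(metrics, key=metrics.get)]

        message = [1, 0, 1, 1, 0, 0, 1]
        channel = encode(message)
        channel[3] ^= 1                    # one bit flipped by the channel
        print("recovered:", viterbi(channel) == message)   # True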

  15. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, we show through examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from the channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
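
    A toy sketch of the encoder structure described above, an accumulator as precoder followed by repetition, interleaving and a final accumulator, is short enough to write out. The repetition factor and pseudo-random interleaver are illustrative choices, not an optimized protograph design:

        import random

        def accumulate(bits):
            """Running mod-2 sum: the 1/(1+D) accumulator."""
            acc, out = 0, []
            for b in bits:
                acc ^= b
                out.append(acc)
            return out

        def ara_encode(info, repeat=3, seed=7):
            precoded = accumulate(info)            # accumulator as precoder
            repeated = [b for b in precoded for _ in range(repeat)]
            rng = random.Random(seed)              # fixed pseudo-random interleaver
            perm = list(range(len(repeated)))
            rng.shuffle(perm)
            interleaved = [repeated[p] for p in perm]
            return accumulate(interleaved)         # final accumulator

        info = [1, 0, 1, 1, 0]
        print("codeword:", ara_encode(info))       # toy rate-1/3 example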

  16. Visualizing expanding warm dense matter heated by laser-generated ion beams

    SciTech Connect

    Bang, Woosuk

    2015-08-24

    This PowerPoint presentation concluded with the following. We calculated the expected heating per atom and the temperatures of various target materials using a Monte Carlo simulation code and SESAME EOS tables. We used aluminum ion beams to heat gold and diamond uniformly and isochorically. A streak camera imaged the expansion of warm dense gold (5.5 eV) and diamond (1.7 eV). GXI-X recorded all 16 x-ray images of the unheated gold bar targets, proving that it could image the motion of the gold/diamond interface of the proposed target.

  17. The Effects of Stellar Dynamics on the Evolution of Young, Dense Stellar Systems

    NASA Astrophysics Data System (ADS)

    Belkus, H.; van Bever, J.; Vanbeveren, D.

    In this paper, we report on the first results of a project in Brussels in which we study the effects of stellar dynamics on the evolution of young dense stellar systems, using three decades of expertise in massive-star evolution and our population (number and spectral) synthesis code. We highlight an unconventionally-formed-object scenario (UFO scenario) for Wolf-Rayet binaries and study the effects of a luminous-blue-variable-type instability wind mass-loss formalism on the formation of intermediate-mass black holes.

  18. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  19. Asymmetric effect on single-file dense pedestrian flow

    NASA Astrophysics Data System (ADS)

    Kuang, Hua; Cai, Mei-Jing; Li, Xing-Li; Song, Tao

    2015-11-01

    In this paper, an extended optimal velocity model is proposed to simulate single-file dense pedestrian flow by considering asymmetric interaction (i.e. attractive and repulsive forces), which depends on the distance between pedestrians. The stability condition of this model is obtained by using linear stability theory. The phase diagram comparison and analysis show that the asymmetric effect plays an important role in strengthening the stabilization of the system. The modified Korteweg-de Vries (mKdV) equation near the critical point is derived by applying the reductive perturbation method. The pedestrian jam can be described by the kink-antikink soliton solution of the mKdV equation. From the simulation of the space-time evolution of the distance between pedestrians, it is found that the asymmetric interaction is more efficient than the symmetric interaction in suppressing the pedestrian jam. Furthermore, the simulation results are consistent with the theoretical analysis and reproduce experimental phenomena better.
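
    A minimal numerical sketch of single-file optimal-velocity dynamics with an asymmetric sensitivity is given below; the tanh optimal-velocity function and all parameter values are generic assumptions for illustration, not the calibrated model of the paper.

        import numpy as np

        # Single-file optimal-velocity dynamics on a ring: sensitivity is
        # larger when closing in on the pedestrian ahead (repulsive side)
        # than when falling behind (attractive side).
        np.random.seed(0)
        N, L = 50, 50.0                  # pedestrians, ring length [m]
        a_rep, a_att = 2.0, 1.0          # asymmetric sensitivities [1/s]

        def V_opt(dx):
            """Optimal velocity as a function of headway (tanh form)."""
            return 0.5 * (np.tanh(dx - 2.0) + np.tanh(2.0))

        x = (np.linspace(0, L, N, endpoint=False) + 0.1*np.random.rand(N)) % L
        v = V_opt((np.roll(x, -1) - x) % L)    # start near steady state

        dt = 0.05
        for _ in range(4000):
            dx = (np.roll(x, -1) - x) % L      # headway to pedestrian ahead
            target = V_opt(dx)
            sens = np.where(target < v, a_rep, a_att)  # braking vs catching up
            v += dt * sens * (target - v)
            x = (x + dt * v) % L

        print(f"mean speed: {v.mean():.3f} m/s, speed spread: {v.std():.3f} m/s")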

  20. Development and evaluation of a dense gas plume model

    SciTech Connect

    Matthias, C.S.

    1994-12-31

    The dense gas plume model (continuous release) described in this paper has been developed using the same principles as for a dense gas puff model (instantaneous release). It is a box model for which the main goal is to predict the height H, width W, and maximum concentration C_b for a steady dense plume. A secondary goal is to distribute the mass more realistically by empirically attaching Gaussian distributions in the horizontal and vertical directions. For ease of reference, the models and supporting programs will be referred to as DGM (Dense Gas Models).
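
    The box-model idea with attached Gaussian distributions can be sketched directly: a uniform core of width W and height H at the maximum concentration C_b, with Gaussian wings beyond the core edges. The profile below is a generic illustration with invented parameters, not DGM output.

        import numpy as np

        W, H, C_b = 20.0, 2.0, 500.0     # core width [m], height [m], conc. [ppm]
        sigma_y, sigma_z = 5.0, 0.8      # Gaussian wing spreads [m] (assumed)

        def concentration(y, z):
            """Concentration at crosswind offset y and height z: uniform in
            the box core, Gaussian falloff attached beyond the core edges."""
            fy = np.where(np.abs(y) <= W/2, 1.0,
                          np.exp(-((np.abs(y) - W/2)**2) / (2*sigma_y**2)))
            fz = np.where(z <= H, 1.0,
                          np.exp(-((z - H)**2) / (2*sigma_z**2)))
            return C_b * fy * fz

        for y in (0.0, 10.0, 15.0, 25.0):
            print(f"C(y={y:5.1f}, z=1.0) = {float(concentration(y, 1.0)):8.2f} ppm")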

  1. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative has been incorporated into one of the more tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for a shape optimization of a practical configuration with the higher-fidelity simulations (TLNS and dense-grid based simulations) demanded substantial computational resources. Therefore, the final improvement reported herein responded to this point by including an alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate-gradient (PbCG) and other direct solvers.

  2. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  3. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  4. Binary primitive alternant codes

    NASA Technical Reports Server (NTRS)

    Helgert, H. J.

    1975-01-01

    In this note we investigate the properties of two classes of binary primitive alternant codes that are generalizations of the primitive BCH codes. For these codes we establish certain equivalence and invariance relations and obtain values of d and d*, the minimum distances of the prime and dual codes.

  5. Algebraic geometric codes

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1991-01-01

    The performance characteristics are discussed of certain algebraic geometric codes. Algebraic geometric codes have good minimum distance properties. On many channels they outperform other comparable block codes; therefore, one would expect them eventually to replace some of the block codes used in communications systems. It is suggested that it is unlikely that they will become useful substitutes for the Reed-Solomon codes used by the Deep Space Network in the near future. However, they may be applicable to systems where the signal to noise ratio is sufficiently high so that block codes would be more suitable than convolutional or concatenated codes.

  6. Detection of the dominant direction of information flow and feedback links in densely interconnected regulatory networks

    PubMed Central

    Ispolatov, Iaroslav; Maslov, Sergei

    2008-01-01

    Background: Finding the dominant direction of flow of information in densely interconnected regulatory or signaling networks is required in many applications in computational biology and neuroscience. This is achieved by first identifying and removing links which close up feedback loops in the original network and then hierarchically arranging nodes in the remaining network. In mathematical language this corresponds to the problem of making a graph acyclic by removing as few links as possible and thus altering the original graph in the least possible way. The exact solution of this problem requires enumeration of all cycles and combinations of removed links, which, as an NP-hard problem, is computationally prohibitive even for modest-size networks. Results: We introduce and compare two approximate numerical algorithms for solving this problem: a probabilistic one based on simulated annealing of the hierarchical layout of the network, which minimizes the number of "backward" links going from lower to higher hierarchical levels, and a deterministic "greedy" algorithm that sequentially cuts the links participating in the largest number of feedback cycles. We find that the annealing algorithm outperforms the deterministic one in terms of speed, memory requirement, and the actual number of removed links. To further improve the visual perception of the layout produced by the annealing algorithm, we perform an additional minimization of the length of hierarchical links while keeping the number of anti-hierarchical links at their minimum. The annealing algorithm is then tested on several examples of regulatory and signaling networks/pathways operating in human cells. Conclusion: The proposed annealing algorithm is powerful enough to perform often optimal layouts of protein networks in whole organisms, consisting of around ~10⁴ nodes and ~10⁵ links, while the applicability of the greedy algorithm is limited to individual pathways with ~100 vertices. The considered examples
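
    The annealing idea described above can be sketched in a few lines: place the nodes in a linear order, count links that point backward in that order, and accept swap moves by the Metropolis rule. The move set, cooling schedule and toy graph below are simplifications for illustration, not the authors' implementation.

        import math, random

        def backward_links(order, edges):
            """Number of links going backward (later to earlier) in the order."""
            pos = {n: i for i, n in enumerate(order)}
            return sum(1 for u, v in edges if pos[u] > pos[v])

        def anneal(nodes, edges, steps=20000, t0=2.0, seed=1):
            rng = random.Random(seed)
            order = list(nodes)
            cost = backward_links(order, edges)
            for step in range(steps):
                t = t0 * (1.0 - step / steps) + 1e-9     # linear cooling
                i, j = rng.randrange(len(order)), rng.randrange(len(order))
                order[i], order[j] = order[j], order[i]  # propose a swap
                new = backward_links(order, edges)
                if new <= cost or rng.random() < math.exp(-(new - cost) / t):
                    cost = new                           # accept
                else:
                    order[i], order[j] = order[j], order[i]  # undo the swap
            return order, cost

        nodes = list("abcdef")
        edges = [("a","b"), ("b","c"), ("c","d"), ("d","b"),   # b-c-d feedback loop
                 ("d","e"), ("e","f"), ("f","a")]              # feedback to a
        order, cost = anneal(nodes, edges)
        print("hierarchical order:", order, "| backward links:", cost)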

  7. Phonological Codes Constrain Output of Orthographic Codes via Sublexical and Lexical Routes in Chinese Written Production

    PubMed Central

    Wang, Cheng; Zhang, Qingfang

    2015-01-01

    To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as the word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. The results suggest that phonological codes constrain handwritten production via both lexical and sublexical routes. PMID:25879662

  8. An experimental study of dense aerosol aggregations

    NASA Astrophysics Data System (ADS)

    Dhaubhadel, Rajan

    We demonstrated that an aerosol can gel. This gelation was then used as a one-step method to produce an ultralow-density porous carbon or silica material. This material was named an aerosol gel because it was made via gelation of particles in the aerosol phase. The carbon and silica aerosol gels had high specific surface areas (200-350 m²/g for carbon and 300-500 m²/g for silica) and an extremely low density (2.5-6.0 mg/cm³), properties similar to conventional aerogels. Key aspects for forming a gel from an aerosol are a large volume fraction, ca. 10⁻⁴ or greater, and a small primary particle size, 50 nm or smaller, so that the gel time is fast compared to other characteristic times. Next we report the results of a study of the cluster morphology and kinetics of a dense aggregating aerosol system using the small-angle light scattering technique. The soot particles started as individual monomers, ca. 38 nm in radius, grew into bigger clusters with time and finally stopped evolving after spanning a network across the whole system volume. This spanning is aerosol gelation. The gelled system showed a hybrid morphology with a lower fractal dimension at length scales of a micron or smaller and a higher fractal dimension at length scales greater than a micron. The study of the kinetics of the aggregating system showed that when the system gelled, the aggregation kernel homogeneity lambda attained a value of 0.4 or higher. The magnitude of the aggregation kernel increased with increasing volume fraction. We also used an image analysis technique to study the cluster morphology. From the digitized pictures of soot clusters the cluster morphology was determined by two different methods: structure factor and perimeter analysis. We find a hybrid, superaggregate morphology characterized by a fractal dimension of D_f ≈ 1.8 between the monomer size, ca. 50 nm, and 1 μm, and D_f ≈ 2.6 at larger length scales up to ~10 μm. The superaggregate morphology is a

  9. A quest for super dense aluminium

    NASA Astrophysics Data System (ADS)

    Fiquet, G.; Narayana, C.; Bellin, C.; Shukla, A.; Esteve, I.; Mezouar, N.

    2013-12-01

    The extreme-pressure phase diagram of materials is important not only for understanding the interiors of planets and stars, but also for the fundamental understanding of the relation between crystal structure and electronic structure. Structural transitions induced by extreme pressure are governed by the deformation of the valence electron charge density, which bears the brunt of increasing compression while the relative volume occupied by the nearly incompressible ionic core electrons increases. At extreme pressures common materials are expected to transform into new dense phases with extremely compact atomic arrangements that may also have unusual physical properties. In this report, we present new experiments carried out on aluminium. A simple system like Al is not only important as a benchmark for theory, but can also be used as a standard for pressures in the TPa range and beyond, which are targeted at new dynamic compression facilities such as the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory in the US or the Laser Mégajoule (LMJ) in Bordeaux, France. For aluminium, first-principles calculations have consistently predicted a phase transition sequence from fcc to hcp and hcp to bcc in a pressure range below 0.5 TPa [Tambe et al., Phys. Rev. B 77, 172102, 2008]. The hcp phase was identified at 217 GPa in a recent experiment [Akahama et al., Phys. Rev. Lett. 96, 45505, 2006], but the detection of the predicted bcc phase has been hampered by the difficulty of routine static high-pressure experiments beyond 350 GPa. Here, we report on the overcoming of this obstacle and the detection of all the structural phase transitions predicted in Al by achieving a pressure in excess of 500 GPa in the static regime in a diamond-anvil cell. In particular, using X-ray diffraction at the high-pressure beamline ID27 of the European Synchrotron Radiation Facility (ESRF), we find a bcc super-dense phase of aluminium at a pressure of 380 GPa. In this report

  10. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.

  11. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  12. Dense heteroclinic tangencies near a Bykov cycle

    NASA Astrophysics Data System (ADS)

    Labouriau, Isabel S.; Rodrigues, Alexandre A. P.

    2015-12-01

    This article presents a mechanism for the coexistence of hyperbolic and non-hyperbolic dynamics arising in a neighbourhood of a Bykov cycle where trajectories turn in opposite directions near the two nodes - we say that the nodes have different chirality. We show that in the set of vector fields defined on a three-dimensional manifold, there is a class where tangencies of the invariant manifolds of two hyperbolic saddle-foci occur densely. The class is defined by the presence of the Bykov cycle, and by a condition on the parameters that determine the linear part of the vector field at the equilibria. This has important consequences: the global dynamics is persistently dominated by heteroclinic tangencies and by Newhouse phenomena, coexisting with hyperbolic dynamics arising from transversality. The coexistence gives rise to linked suspensions of Cantor sets, with hyperbolic and non-hyperbolic dynamics, in contrast with the case where the nodes have the same chirality. We illustrate our theory with an explicit example where tangencies arise in the unfolding of a symmetric vector field on the three-dimensional sphere.

  13. Mach reflection in a warm dense plasma

    SciTech Connect

    Foster, J. M.; Rosen, P. A.; Wilde, B. H.; Hartigan, P.; Perry, T. S.

    2010-11-15

    The phenomenon of irregular shock-wave reflection is of importance in high-temperature gas dynamics, astrophysics, inertial-confinement fusion, and related fields of high-energy-density science. However, most experimental studies of irregular reflection have used supersonic wind tunnels or shock tubes, and few or no data are available for Mach reflection phenomena in the plasma regime. Similarly, analytic studies have often been confined to calorically perfect gases. We report the first direct observation, and numerical modeling, of Mach stem formation for a warm, dense plasma. Two ablatively driven aluminum disks launch oppositely directed, near-spherical shock waves into a cylindrical plastic block. The interaction of these shocks results in the formation of a Mach-ring shock that is diagnosed by x-ray backlighting. The data are modeled using radiation hydrocodes developed by AWE and LANL. The experiments were carried out at the University of Rochester's Omega laser [J. M. Soures, R. L. McCrory, C. P. Verdon et al., Phys. Plasmas 3, 2108 (1996)] and were inspired by modeling [A. M. Khokhlov, P. A. Hoeflich, E. S. Oran et al., Astrophys. J. 524, L107 (1999)] of core-collapse supernovae suggesting that in asymmetric supernova explosions significant mass may be ejected in a Mach-ring formation launched by bipolar jets.

  14. A new rheology for dense granular flows

    NASA Astrophysics Data System (ADS)

    Jop, Pierre

    2005-11-01

    Recent experiments and numerical simulations of dry and dense granular flows suggest that a simple rheological description, in terms of a shear-rate dependent friction coefficient, may be sufficient to capture the major flow properties [1,2]. In this work we generalize this approach by proposing a tensorial form of this rheology, leading to 3D hydrodynamic equations for granular flows. We show that quantitative predictions can be obtained with this model by studying the flow of grains on a pile confined between two lateral walls. In this configuration we have experimentally measured the free surface velocity profile and the flowing thickness for different flow rates and channel widths. The results are compared with numerical simulations of the hydrodynamic model and quantitative agreement is observed. This study strongly supports the relevance of the proposed rheology. [1] F. da Cruz, S. Emam, M. Prochnow, J.-N. Roux and F. Chevoir, cond-mat/0503682 (2005). [2] G.D.R. Midi, EPJ E 14, 367-371 (2004).
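
    The scalar ingredient of such a rheology, a shear-rate dependent friction coefficient, is commonly written through the inertial number I = (shear rate) * d / sqrt(P/rho) as mu(I) = mu_s + (mu_2 - mu_s)/(1 + I_0/I). The sketch below evaluates this widely used form; the parameter values are typical glass-bead numbers quoted as assumptions, not measurements from this study.

        import numpy as np

        # Friction coefficient depending on shear rate through the inertial
        # number I; this is the mu(I) form widely used in the dense granular
        # flow literature, evaluated with typical glass-bead parameters.
        mu_s, mu_2, I0 = 0.38, 0.64, 0.279
        d, rho = 0.5e-3, 2450.0          # grain diameter [m], density [kg/m^3]

        def mu(I):
            """mu(I) = mu_s + (mu_2 - mu_s) / (1 + I0/I)."""
            return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

        def shear_stress(gamma_dot, P):
            """Shear stress tau = mu(I) * P at pressure P and shear rate gamma_dot."""
            I = gamma_dot * d / np.sqrt(P / rho)
            return mu(I) * P

        for gd in (1.0, 10.0, 100.0):
            print(f"gamma_dot = {gd:6.1f} 1/s  ->  tau = {shear_stress(gd, 1000.0):7.1f} Pa")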

  15. Oblique impact of dense granular sheets

    NASA Astrophysics Data System (ADS)

    Ellowitz, Jake; Guttenberg, Nicholas; Jaeger, Heinrich M.; Nagel, Sidney R.; Zhang, Wendy W.

    2013-11-01

    Motivated by experiments showing that impacts of granular jets with non-circular cross sections produce thin ejecta sheets with anisotropic shapes, we study what happens when two sheets containing densely packed, rigid grains traveling at the same speed collide asymmetrically. Discrete particle simulations and a continuum frictional fluid model yield the same steady-state solution of two exit streams emerging from the incident streams. When the incident angle Δθ is greater than Δθ_c = 120° ± 10°, the exit streams' angles differ from those measured in water sheet experiments. Below Δθ_c, the exit angles from granular and water sheet impacts agree. This correspondence is surprising because 2D Euler jet impact, the idealization relevant for both situations, is ill posed: a generic Δθ value permits a continuous family of solutions. Our finding that granular and water sheet impacts evolve into the same member of the solution family suggests that previous proposals that perturbations such as viscous drag, surface tension or air entrapment select the actual outcome are not correct. Currently at Department of Physics, University of Oregon, Eugene, OR 97403.

  16. Packing frustration in dense confined fluids

    NASA Astrophysics Data System (ADS)

    Nygård, Kim; Sarman, Sten; Kjellander, Roland

    2014-09-01

    Packing frustration for confined fluids, i.e., the incompatibility between the preferred packing of the fluid particles and the packing constraints imposed by the confining surfaces, is studied for a dense hard-sphere fluid confined between planar hard surfaces at short separations. The detailed mechanism for the frustration is investigated via an analysis of the anisotropic pair distributions of the confined fluid, as obtained from integral equation theory for inhomogeneous fluids at pair correlation level within the anisotropic Percus-Yevick approximation. By examining the mean forces that arise from interparticle collisions around the periphery of each particle in the slit, we calculate the principal components of the mean force for the density profile - each component being the sum of collisional forces on a particle's hemisphere facing either surface. The variations of these components with the slit width give rise to rather intricate changes in the layer structure between the surfaces, but, as shown in this paper, the basis of these variations can be easily understood qualitatively and often also semi-quantitatively. It is found that the ordering of the fluid is in essence governed locally by the packing constraints at each single solid-fluid interface. A simple superposition of forces due to the presence of each surface gives surprisingly good estimates of the density profiles, but there remain nontrivial confinement effects that cannot be explained by superposition, most notably the magnitude of the excess adsorption of particles in the slit relative to bulk.

  17. The lifetime of evaporating dense sprays

    NASA Astrophysics Data System (ADS)

    de Rivas, Alois; Villermaux, Emmanuel

    2015-11-01

    We study the processes by which a set of nearby liquid droplets (a spray) evaporates in a gas phase whose relative humidity (vapor concentration) is controlled at will. A dense spray of micron-sized water droplets is formed in air by a pneumatic atomizer and conveyed through a nozzle into a closed chamber whose vapor concentration has been pre-set to a controlled value. The resulting plume extension depends on the relative humidity of the diluting medium. When the spray plume is straight and laminar, droplets evaporate at its edge, where the vapor is saturated and diffuses through a boundary layer developing around the plume. We quantify the shape and length of the plume as a function of the injection, vapor diffusion, thermodynamic and environmental parameters. For higher injection Reynolds numbers, standard shear instabilities distort the plume into stretched lamellae, thus enhancing the diffusion of vapor from their boundary towards the diluting medium. These lamellae vanish in a finite time depending on the intensity of the stretching and the relative humidity of the environment, with a lifetime diverging close to the equilibrium limit, when the plume develops in a medium saturated with vapor. The dependences are described quantitatively.

  18. Dense colloidal fluids form denser amorphous sediments

    PubMed Central

    Liber, Shir R.; Borohovich, Shai; Butenko, Alexander V.; Schofield, Andrew B.; Sloutskin, Eli

    2013-01-01

    We relate, by simple analytical centrifugation experiments, the density of colloidal fluids with the nature of their randomly packed solid sediments. We demonstrate that the most dilute fluids of colloidal hard spheres form loosely packed sediments, where the volume fraction of the particles approaches in frictional systems the random loose packing limit, φ_RLP = 0.55. The dense fluids of the same spheres form denser sediments, approaching the so-called random close packing limit, φ_RCP = 0.64. Our experiments, where particle sedimentation in a centrifuge is sufficiently rapid to avoid crystallization, demonstrate that the density of the sediments varies monotonically with the volume fraction of the initial suspension. We reproduce our experimental data by simple computer simulations, where structural reorganizations are prohibited, such that the rate of sedimentation is irrelevant. This suggests that in colloidal systems, where viscous forces dominate, the structure of randomly close-packed and randomly loose-packed sediments is determined by the well-known structure of the initial fluids of simple hard spheres, provided that the crystallization is fully suppressed. PMID:23530198

  19. Particular Properties of Dense Supernova Matter

    NASA Astrophysics Data System (ADS)

    Takatsuka, T.; Nishizaki, S.; Hiura, J.

    1994-10-01

    Dense supernova matter composed of n, p, e⁻, e⁺, ν_e and ν̄_e is investigated in detail by solving self-consistently a set of finite-temperature Hartree-Fock equations with an effective nucleon interaction. The effective interaction includes a phenomenological three-nucleon interaction to assure the saturation property of symmetric nuclear matter. Results for thermodynamic quantities and mixing ratios of the respective components are analyzed and tabulated for a wide region of density (ρ = (1-6)ρ₀) and temperature (T = 10-40 MeV) for lepton fractions Y_l = 0.3, 0.35 and 0.4. We discuss particular properties of the matter, such as the constancy of the composition, the large proton fraction expressed by Y_p ≈ (2/3)Y_l + 0.05 and the stiffened equation of state, and also discuss remarkable features of hot neutron stars at birth, such as the fat density profile and the increasing temperature toward the center. It is shown that these features are caused essentially by the effects of neutrino trapping, which generates the high and constant lepton fraction and the isentropic nature, effects which are absent in neutron star matter.

  20. New source of dense, cryogenic positron plasmas.

    PubMed

    Jørgensen, L V; Amoretti, M; Bonomi, G; Bowe, P D; Canali, C; Carraro, C; Cesar, C L; Charlton, M; Doser, M; Fontana, A; Fujiwara, M C; Funakoshi, R; Genova, P; Hangst, J S; Hayano, R S; Kellerbauer, A; Lagomarsino, V; Landua, R; Lodi Rizzini, E; Macrì, M; Madsen, N; Mitchard, D; Montagna, P; Rotondi, A; Testera, G; Variola, A; Venturelli, L; van der Werf, D P; Yamazaki, Y

    2005-07-01

    We have developed a new method, based on the ballistic transfer of preaccumulated plasmas, to obtain large and dense positron plasmas in a cryogenic environment. The method involves transferring plasmas emanating from a region with a low magnetic field (0.14 T) and relatively high pressure (10⁻⁹ mbar) into a 15 K Penning-Malmberg trap immersed in a 3 T magnetic field with a base pressure better than 10⁻¹³ mbar. The achieved positron accumulation rate in the high-field cryogenic trap is more than one and a half orders of magnitude higher than the previous most efficient UHV-compatible scheme. Subsequent stacking resulted in a plasma containing more than 1.2 × 10⁹ positrons, which is a factor 4 higher than previously reported. Using a rotating-wall electric field, plasmas containing about 20 × 10⁶ positrons were compressed to a density of 2.6 × 10¹⁰ cm⁻³. This is a factor of 6 improvement over earlier measurements. PMID:16090691

  1. Borehole stability in densely welded tuffs

    SciTech Connect

    Fuenkajorn, K.; Daemen, J.J.K.

    1992-07-01

    The stability of boreholes, or more generally of underground openings (i.e. including shafts, ramps, drifts, tunnels, etc.) at locations where seals or plugs are to be placed is an important consideration in seal design for a repository (Juhlin and Sandstedt, 1989). Borehole instability or borehole breakouts induced by stress redistribution could negate the effectiveness of seals or plugs. Breakout fractures along the wall of repository excavations or exploratory holes could provide a preferential flowpath for groundwater or gaseous radionuclides to bypass the plugs. After plug installation, swelling pressures exerted by a plug could induce radial cracks or could open or widen preexisting cracks in the rock at the bottom of the breakouts where the tangential compressive stresses have been released by the breakout process. The purpose of the work reported here is to determine experimentally the stability of a circular hole in a welded tuff sample subjected to various external boundary loads. Triaxial and biaxial borehole stability tests have been performed on densely welded Apache Leap tuff samples and Topopah Spring tuff samples. The nominal diameter of the test hole is 13.3 or 14.4 mm for triaxial testing, and 25.4 mm for biaxial testing. The borehole axis is parallel to one of the principal stress axes. The boreholes are drilled through the samples prior to applying external boundary loads. The boundary loads are progressively increased until breakouts occur or until the maximum load capacity of the loading system has been reached. 74 refs.

  2. Proton Stopping Power in Warm Dense Hydrogen

    NASA Astrophysics Data System (ADS)

    Higginson, Drew; Chen, Sophia; Atzeni, Stefano; Gauthier, Maxence; Mangia, Feliciana; Marquès, Jean-Raphaël; Riquier, Raphaël; Fuchs, Julien

    2013-10-01

    Warm dense matter (WDM) research is fundamental to many fields of physics, including the fusion sciences and astrophysical phenomena. In the WDM regime, particle stopping power differs significantly from that in cold matter and ideal plasma due to free-electron contributions, plasma correlation effects and electron degeneracy. The creation of WDM with a temporal duration consistent with the particle probes is difficult to achieve experimentally. The short-pulse laser platform allows the production of WDM along with relatively short bunches of protons compatible with such measurements; however, until recently, the intrinsic broadband proton spectrum was not well suited to investigating the stopping power directly. This difficulty has been overcome using a novel magnetic particle selector (ΔE/E = 10%) to select protons (in the range 100-1000 keV), as demonstrated with the ELFIE laser at LULI, France. These proton bunches probe high-density (5 × 10²⁰ cm⁻³) gases (H, He) heated by a nanosecond laser to reach estimated temperatures above 100 eV. Measurement of the proton energy loss within the heated gas allows the stopping power to be determined quantitatively. The experimental results in cold matter are compared to preexisting models to give credibility to the measurement technique. The results from heated matter show that the stopping power of 450 keV protons is dramatically reduced in heated hydrogen plasma.

  3. Order and instabilities in dense bacterial colonies

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev

    2012-02-01

    The structure of cell colonies is governed by the interplay of many physical and biological factors, ranging from the properties of the surrounding media to cell-cell communication and gene expression in individual cells. The biomechanical interactions arising from the growth and division of individual cells in confined environments are ubiquitous, yet little work has focused on this fundamental aspect of colony formation. By combining experimental observations of growing monolayers of a non-motile strain of the bacterium Escherichia coli in a shallow microfluidic chemostat with discrete-element simulations and continuous theory, we demonstrate that expansion of a dense colony leads to rapid orientational alignment of rod-like cells. However, in larger colonies, anisotropic compression may lead to a buckling instability which breaks the perfect nematic order. Furthermore, we found that in shallow cavities feedback between cell growth and mobility in a confined environment leads to a novel cell streaming instability. Joint work with W. Mather, D. Volfson, O. Mondragón-Palomino, T. Danino, S. Cookson, and J. Hasty (UCSD) and D. Boyer, S. Orozco-Fuentes (UNAM, Mexico).

  4. Droplet formation and scaling in dense suspensions

    PubMed Central

    Miskin, Marc Z.; Jaeger, Heinrich M.

    2012-01-01

    When a dense suspension is squeezed from a nozzle, droplet detachment can occur similarly to that of pure liquids. While in pure liquids the process of droplet detachment is well characterized through self-similar profiles and known scaling laws, we show here that the simple presence of particles causes suspensions to break up in a new fashion. Using high-speed imaging, we find that detachment of a suspension drop is described by a power law; specifically, the neck minimum radius r_m follows a power law in the time τ remaining until breakup at τ = 0. We demonstrate data collapse in a variety of particle/liquid combinations, packing fractions, solvent viscosities, and initial conditions. We argue that this scaling is a consequence of particles deforming the neck surface, thereby creating a pressure that is balanced by inertia, and show how it emerges from topological constraints that relate particle configurations with macroscopic Gaussian curvature. This new type of scaling, uniquely enforced by geometry and regulated by the particles, displays memory of its initial conditions, fails to be self-similar, and has implications for the pressure given at generic suspension interfaces. PMID:22392979

  5. Activated Dynamics in Dense Model Nanocomposites

    NASA Astrophysics Data System (ADS)

    Xie, Shijie; Schweizer, Kenneth

    The nonlinear Langevin equation approach is applied to investigate the ensemble-averaged activated dynamics of small molecule liquids (or disconnected segments in a polymer melt) in dense nanocomposites under model isobaric conditions where the spherical nanoparticles are dynamically fixed. Fully thermalized and quenched-replica integral equation theory methods are employed to investigate the influence on matrix dynamics of the equilibrium and nonequilibrium nanocomposite structure, respectively. In equilibrium, the miscibility window can be narrow due to depletion and bridging attraction induced phase separation which limits the study of activated dynamics to regimes where the barriers are relatively low. In contrast, by using replica integral equation theory, macroscopic demixing is suppressed, and the addition of nanoparticles can induce much slower activated matrix dynamics which can be studied over a wide range of pure liquid alpha relaxation times, interfacial attraction strengths and ranges, particle sizes and loadings, and mixture microstructures. Numerical results for the mean activated relaxation time, transient localization length, matrix elasticity and kinetic vitrification in the nanocomposite will be presented.

  6. Dense Hypervelocity Plasma Jets for Fusion Applications

    NASA Astrophysics Data System (ADS)

    Witherspoon, F. Douglas; Thio, Y. C. Francis

    2005-10-01

    High velocity dense plasma jets are being developed for a variety of fusion applications, including refueling, disruption mitigation, high energy density plasmas, magnetized target/magneto-inertial fusion, injection of angular momentum into centrifugally confined mirrors, and others. The technical goal is to accelerate plasma blobs of density >10¹⁷ cm⁻³ and total mass >100 micrograms to velocities >200 km/s. The approach utilizes symmetrical injection of very high density plasma into a coaxial EM accelerator having a tailored cross-section that prevents formation of the blow-by instability. AFRL MACH2 modeling identified two electrode configurations that produce the desired plasma jet parameters. The injected plasma is generated by up to 64 radially oriented capillary discharges arranged uniformly around the circumference of an angled annular injection section. Initial experimental results are presented in which 8 capillaries are fired in parallel with a jitter of ~100 ns. Current focus is on higher voltage operation to reduce the jitter to a few tens of ns, and on development of a suite of optical and spectroscopic plasma diagnostics.

  7. Thermochemistry of dense hydrous magnesium silicates

    NASA Technical Reports Server (NTRS)

    Bose, Kunal; Burnley, Pamela; Navrotsky, Alexandra

    1994-01-01

    Recent experimental investigations under mantle conditions have identified a suite of dense hydrous magnesium silicate (DHMS) phases that could be conduits to transport water to at least the 660 km discontinuity via mature, relatively cold, subducting slabs. Water released by the successive dehydration of these phases during subduction could be responsible for deep-focus earthquakes, mantle metasomatism and a host of other physico-chemical processes central to our understanding of the earth's deep interior. In order to construct a thermodynamic database that can delineate and predict the stability ranges of DHMS phases, reliable thermochemical and thermophysical data are required. One of the major obstacles in calorimetric studies of phases synthesized under high pressure conditions has been the limitation imposed by the small (less than 5 mg) sample mass. Our refinement of calorimetric techniques now allows precise determination of the enthalpies of solution of less than 5 mg samples of hydrous magnesium silicates. For example, high temperature solution calorimetry of natural talc (Mg₀.₉₉Fe₀.₀₁Si₄O₁₀(OH)₂), periclase (MgO) and quartz (SiO₂) yields enthalpies of drop solution at 1044 K of 592.2 (2.2), 52.01 (0.12) and 45.76 (0.4) kJ/mol, respectively. The corresponding enthalpy of formation from oxides at 298 K for talc is -5908.2 kJ/mol, agreeing within 0.1 percent with literature values.

  8. Superconductivity in dense carbon-based materials

    NASA Astrophysics Data System (ADS)

    Lu, Siyu; Liu, Hanyu; Naumov, Ivan I.; Meng, Sheng; Li, Yinwei; Tse, John S.; Yang, Bai; Hemley, Russell J.

    2016-03-01

    Guided by a simple strategy in the search for new superconducting materials, we predict that high-temperature superconductivity can be realized in classes of high-density materials having strong sp³ chemical bonding and high lattice symmetry. We examine in detail sodalite carbon frameworks doped with simple metals such as Li, Na, and Al. Though such materials share some common features with doped diamond, their doping level is not limited, and the density of states at the Fermi level can be as high as that in the renowned MgB₂. Together with other factors, this boosts the superconducting temperature (T_c) of the materials investigated to higher levels compared to doped diamond. For example, the T_c of sodalite-like NaC₆ is predicted to be above 100 K. This phase and a series of other sodalite-based superconductors are predicted to be metastable but dynamically stable phases. Owing to the rigid carbon framework of these and related dense carbon materials, the doped sodalite-based structures could be recoverable as potentially useful superconductors.

  9. Elemental nitrogen partitioning in dense interstellar clouds

    PubMed Central

    Daranlot, Julien; Hincelin, Ugo; Bergeat, Astrid; Costes, Michel; Loison, Jean-Christophe; Wakelam, Valentine; Hickson, Kevin M.

    2012-01-01

    Many chemical models of dense interstellar clouds predict that the majority of gas-phase elemental nitrogen should be present as N2, with an abundance approximately five orders of magnitude less than that of hydrogen. As a homonuclear diatomic molecule, N2 is difficult to detect spectroscopically through infrared or millimeter-wavelength transitions. Therefore, its abundance is often inferred indirectly through its reaction product N2H+. Two main formation mechanisms, each involving two radical-radical reactions, are the source of N2 in such environments. Here we report measurements of the low temperature rate constants for one of these processes, the N + CN reaction, down to 56 K. The measured rate constants for this reaction, and those recently determined for two other reactions implicated in N2 formation, are tested using a gas-grain model employing a critically evaluated chemical network. We show that the amount of interstellar nitrogen present as N2 depends on the competition between its gas-phase formation and the depletion of atomic nitrogen onto grains. As the reactions controlling N2 formation are inefficient, we argue that N2 does not represent the main reservoir species for interstellar nitrogen. Instead, elevated abundances of more labile forms of nitrogen such as NH3 should be present on interstellar ices, promoting the eventual formation of nitrogen-bearing organic molecules. PMID:22689957

  10. Neutrino ground state in a dense star

    NASA Astrophysics Data System (ADS)

    Kiers, Ken; Tytgat, Michel H. G.

    1998-05-01

    It has recently been argued that long range forces due to the exchange of massless neutrinos give rise to a very large self-energy in a dense, finite-ranged, weakly charged medium. Such an effect, if real, would destabilize a neutron star. To address this issue we have studied the related problem of a massless neutrino field in the presence of an external, static electroweak potential of finite range. To be precise, we have computed to one loop the exact vacuum energy for the case of a spherical square well potential of depth α and radius R. For small wells, the vacuum energy is reliably determined by a perturbative expansion in the external potential. For large wells, however, the perturbative expansion breaks down. A manifestation of this breakdown is that the vacuum carries a non-zero neutrino charge. The energy and neutrino charge of the ground state are, to a good approximation for large wells, those of a neutrino condensate with chemical potential μ=α. Our results demonstrate explicitly that long-range forces due to the exchange of massless neutrinos do not threaten the stability of neutron stars.

  11. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    PubMed Central

    Bang, W.; Albright, B. J.; Bradley, P. A.; Vold, E. L.; Boettger, J. C.; Fernández, J. C.

    2016-01-01

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. These simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement. PMID:27405664

  12. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    NASA Astrophysics Data System (ADS)

    Bang, W.; Albright, B. J.; Bradley, P. A.; Vold, E. L.; Boettger, J. C.; Fernández, J. C.

    2016-07-01

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. These simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement.

  13. A comparison of dense region detectors for image search and fine-grained classification.

    PubMed

    Iscen, Ahmet; Tolias, Giorgos; Gosselin, Philippe-Henri; Jegou, Herve

    2015-08-01

    We consider a pipeline for image classification or search based on coding approaches like bag of words or Fisher vectors. In this context, the most common approach is to extract the image patches regularly in a dense manner on several scales. This paper proposes and evaluates alternative choices to extract patches densely. Beyond simple strategies derived from regular interest region detectors, we propose approaches based on superpixels, edges, and a bank of Zernike filters used as detectors. The different approaches are evaluated on recent image retrieval and fine-grained classification benchmarks. Our results show that the regular dense detector is outperformed by other methods in most situations, leading us to improve the state of the art in comparable setups on standard retrieval and fine-grained benchmarks. As a byproduct of our study, we show that existing methods for blob and superpixel extraction achieve high accuracy if the patches are extracted along the edges and not around the detected regions. PMID:25879947
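
    For reference, the baseline these alternatives are compared against, regular dense patch extraction on a multi-scale grid, can be sketched in a few lines; the patch size, stride, and crude subsampling here are illustrative choices, not the paper's settings.

        import numpy as np

        def dense_patches(image, size=16, stride=8, steps=(1, 2)):
            """Regular dense sampling: fixed-size patches on a grid, repeated at
            coarser resolutions (integer subsampling stands in for a proper
            Gaussian pyramid)."""
            patches = []
            for step in steps:
                im = image[::step, ::step]
                for y in range(0, im.shape[0] - size + 1, stride):
                    for x in range(0, im.shape[1] - size + 1, stride):
                        patches.append(im[y:y + size, x:x + size])
            return patches

        # dense_patches(np.zeros((128, 128))) -> 15*15 + 7*7 = 274 patches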

  14. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    DOE PAGES Beta

    Bang, Woosuk; Albright, Brian James; Bradley, Paul Andrew; Vold, Erik Lehman; Boettger, Jonathan Carl; Fernández, Juan Carlos

    2016-07-12

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. In conclusion, these simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement.

  15. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter.

    PubMed

    Bang, W; Albright, B J; Bradley, P A; Vold, E L; Boettger, J C; Fernández, J C

    2016-01-01

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1-100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. These simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement. PMID:27405664

  16. Evolutionary models of rotating dense stellar systems: challenges in software and hardware

    NASA Astrophysics Data System (ADS)

    Fiestas, Jose

    2016-02-01

    We present evolutionary models of rotating self-gravitating systems (e.g. globular clusters, galaxy cores). These models are characterized by the presence of initial axisymmetry due to rotation. Central black hole seeds are alternatively included in our models, and black hole growth due to consumption of stellar matter is simulated until the central potential dominates the kinematics in the core. Our goal is to study the long-term evolution (~ Gyr) of relaxed dense stellar systems which deviate from spherical symmetry, their morphology and their final kinematics. With this purpose, we developed a 2D Fokker-Planck analytical code, whose results we confirm with detailed N-body techniques, applying a high-performance code developed for GPU machines. We compare our models to available observations of galactic rotating globular clusters and conclude that initial rotation significantly modifies the shape and lifetime of these systems and cannot be neglected in studying the evolution of globular clusters, and of the galaxy itself.

  17. Terrestrial atmospheric effects induced by counterstreaming dense interstellar cloud material

    NASA Astrophysics Data System (ADS)

    Yeghikyan, A.; Fahr, H.

    The Solar System during its lifetime has encountered dense interstellar clouds, with particle concentrations of about 10^8-10^9 m^-3 or more, more than 10 times, compressing the heliopause to dimensions smaller than 1 AU and bringing the Earth into immediate contact with the interstellar matter. For cloud concentrations greater than 10^8 m^-3, the inflowing material at the Earth, completely shielded from solar wind protons, would be subject only to solar photoionization processes. Previously published results were limited to the consideration of processes outside the accretion radius and did not take photoionization into account. We have developed a 2D two-fluid gasdynamical numerical code to describe the behavior of the incoming neutral matter near the Earth, taking into account both photoionization and the gravity of the Sun. Increased neutral hydrogen fluxes ranging from 10^13 to 10^16 m^-2 s^-1 would cause an alteration of the terrestrial atmosphere. During immersion in the cloud, the total incident flux of neutral hydrogen onto the terrestrial atmosphere in the steady state would be balanced by the upward escape flux of hydrogen and the downward flux of water, which is the product of hydrogen-oxygen chemistry via even-odd reaction schemes. In that case hydrogen acts as a catalyst for the destruction of oxygen atoms and causes the ozone concentration to diminish markedly above 50 km, by a factor of 1.5 at the stratopause to a factor of 1000 or more at the mesopause. Thus, depending on the encounter parameters, the large mixing ratio of hydrogen decreases the ozone concentration in the mesosphere and may trigger an ice age of relatively long duration.

  18. Sticky Particles: Modeling Rigid Aggregates in Dense Planetary Rings

    NASA Astrophysics Data System (ADS)

    Perrine, Randall P.; Richardson, D. C.; Scheeres, D. J.

    2008-09-01

    We present progress on our study of planetary ring dynamics. We use local N-body simulations to examine small patches of dense rings in which self-gravity and mutual collisions dominate the dynamics of the ring material. We use the numerical code pkdgrav to model the motions of 10^5 to 10^7 ring particles, using a sliding patch model with modified periodic boundary conditions. The exact nature of planetary ring particles is not well understood. If covered in a frost-like layer, such irregular surfaces may allow for weak cohesion between colliding particles. Thus we have recently added new functionality to our model, allowing "sticky particles" to lock into rigid aggregates while in a rotating reference frame. This capability allows particles to adhere to one another, forming irregularly shaped aggregates that move as rigid bodies. (The bonds between particles can subsequently break, given sufficient stress.) These aggregates have greater strength than gravitationally bound "rubble piles" and are thus able to grow larger and survive longer under similar stresses. This new functionality allows us to explore planetary ring properties and dynamics in a new way, by self-consistently forming (and destroying) non-spherical aggregates and moonlets via cohesive forces, while in a rotating frame, subjected to planetary tides. (We are not aware of any similar implementations in other existing models.) These improvements allow us to study the many effects that particle aggregation may have on the rings, such as overall ring structure; wake formation; equilibrium properties of non-spherical particles, like pitch angle, orientation, shape, size distribution, and spin; and the surface properties of the ring material. We present test cases and the latest results from this new model. This work is supported by a NASA Earth and Space Science Fellowship.
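
    The sliding patch (shearing-sheet) boundary conditions mentioned here can be sketched generically as below; this is the textbook remap in Hill's approximation with shear rate (3/2)Ω, not pkdgrav's actual implementation, and sign conventions vary between codes.

        OMEGA = 1.0      # orbital frequency of the patch center
        LX = LY = 1.0    # radial and azimuthal patch dimensions

        def remap(x, y, vy, t):
            """Re-insert a particle that left the patch radially, applying the
            azimuthal offset and velocity shift of the sliding neighbor copy
            (equilibrium shear flow v_y = -1.5 * OMEGA * x assumed)."""
            offset = (1.5 * OMEGA * LX * t) % LY   # accumulated drift of copies
            if x >= 0.5 * LX:                      # crossed the outer edge
                x -= LX
                y += offset
                vy += 1.5 * OMEGA * LX
            elif x < -0.5 * LX:                    # crossed the inner edge
                x += LX
                y -= offset
                vy -= 1.5 * OMEGA * LX
            y = (y + 0.5 * LY) % LY - 0.5 * LY     # ordinary periodic wrap in y
            return x, y, vy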

  19. Frontiers in the Physics of Dense Matter for Neutron Stars

    NASA Astrophysics Data System (ADS)

    Steiner, Andrew W.

    2016-04-01

    Neutron stars are an excellent laboratory for nuclear physics. They probe the nucleon-nucleon interaction, the structure of nuclei, and the nature of dense QCD in ways which complement current experimental efforts. This article very briefly summarizes some of the current frontiers in neutron stars and dense matter with an emphasis on how our understanding might be improved in the near future.

  20. The chemistry of phosphorus in dense interstellar clouds

    NASA Technical Reports Server (NTRS)

    Thorne, L. R.; Anicich, V. G.; Prasad, S. S.; Huntress, W. T., Jr.

    1984-01-01

    Laboratory experiments show that the ion-molecule chemistry of phosphorus is significantly different from that of nitrogen in dense interstellar clouds. The PH3 molecule is not readily formed by gas-phase, ion-molecule reactions in these regions. Laboratory results used in a simple kinetic model indicate that the most abundant molecule containing phosphorus in dense clouds is PO.

  1. Propagation of neutrinos in hot and dense media

    NASA Astrophysics Data System (ADS)

    Masood, Samina

    2016-03-01

    We study the propagation of neutrinos in hot and dense media of stellar systems as well as in the very early universe. Our emphasis is on the study of the basic properties of neutrinos with tiny mass and their interactions with the hot and dense media. We also discuss the relevance of our results to astrophysics and cosmology.

  2. Mining connected global and local dense subgraphs for big data

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Shen, Haiying

    2016-01-01

    The problem of discovering connected dense subgraphs of natural graphs is important in data analysis. Discovering dense subgraphs that do not contain denser subgraphs or are not contained in denser subgraphs (called significant dense subgraphs) is also critical for wide-ranging applications. In spite of many works on discovering dense subgraphs, there are no algorithms that can guarantee the connectivity of the returned subgraphs or discover significant dense subgraphs. Hence, in this paper, we define two subgraph discovery problems to discover connected and significant dense subgraphs, propose polynomial-time algorithms and theoretically prove their validity. We also propose an algorithm to further improve the time and space efficiency of our basic algorithm for discovering significant dense subgraphs in big data by taking advantage of the unique features of large natural graphs. In the experiments, we use massive natural graphs to evaluate our algorithms in comparison with previous algorithms. The experimental results show the effectiveness of our algorithms for the two problems and their efficiency. This work is also the first that reveals the physical significance of significant dense subgraphs in natural graphs from different domains.
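
    For orientation, a classical baseline for this problem (not the connectivity-guaranteeing algorithms the paper proposes) is Charikar's greedy peeling, a 2-approximation for the average-degree density |E(S)|/|S|.

        import heapq
        from collections import defaultdict

        def densest_subgraph(edges):
            """Greedy peeling: repeatedly delete a minimum-degree vertex and
            remember the intermediate vertex set of highest average degree."""
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            nodes = set(adj)
            m = sum(len(a) for a in adj.values()) // 2
            best, best_density = set(nodes), m / len(nodes)
            heap = [(len(adj[u]), u) for u in nodes]
            heapq.heapify(heap)
            while nodes:
                d, u = heapq.heappop(heap)
                if u not in nodes or d != len(adj[u]):
                    continue                      # stale heap entry
                nodes.remove(u)
                m -= len(adj[u])
                for v in adj[u]:
                    adj[v].discard(u)
                    heapq.heappush(heap, (len(adj[v]), v))
                if nodes and m / len(nodes) >= best_density:
                    best, best_density = set(nodes), m / len(nodes)
            return best, best_density

        # densest_subgraph([(1, 2), (2, 3), (1, 3), (3, 4)]) -> ({1, 2, 3}, 1.0)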

  3. Dense fibrillar collagen is a master activator of invadopodia

    PubMed Central

    Artym, Vira V.

    2016-01-01

    ABSTRACT Tumor stroma is characterized by abnormal accumulation of dense fibrillar collagen, which promotes tumor progression and metastasis. However, the effect of desmoplastic collagen on cells has been unclear. Our recent findings demonstrate that dense fibrillar collagen activates a novel phosphosignaling mechanism for robust induction of invadopodia in tumor cells and normal fibroblasts. PMID:27314068

  4. Two Rab2 interactors regulate dense-core vesicle maturation.

    PubMed

    Ailion, Michael; Hannemann, Mandy; Dalton, Susan; Pappas, Andrea; Watanabe, Shigeki; Hegermann, Jan; Liu, Qiang; Han, Hsiao-Fen; Gu, Mingyu; Goulding, Morgan Q; Sasidharan, Nikhil; Schuske, Kim; Hullett, Patrick; Eimer, Stefan; Jorgensen, Erik M

    2014-04-01

    Peptide neuromodulators are released from a unique organelle: the dense-core vesicle. Dense-core vesicles are generated at the trans-Golgi and then sort cargo during maturation before being secreted. To identify proteins that act in this pathway, we performed a genetic screen in Caenorhabditis elegans for mutants defective in dense-core vesicle function. We identified two conserved Rab2-binding proteins: RUND-1, a RUN domain protein, and CCCP-1, a coiled-coil protein. RUND-1 and CCCP-1 colocalize with RAB-2 at the Golgi, and rab-2, rund-1, and cccp-1 mutants have similar defects in sorting soluble and transmembrane dense-core vesicle cargos. RUND-1 also interacts with the Rab2 GAP protein TBC-8 and the BAR domain protein RIC-19, a RAB-2 effector. In summary, a pathway of conserved proteins controls the maturation of dense-core vesicles at the trans-Golgi network. PMID:24698274

  5. Four-faceted nanowires generated from densely-packed TiO2 rutile surfaces: Ab initio calculations

    NASA Astrophysics Data System (ADS)

    Evarestov, R. A.; Zhukovskii, Yu. F.

    2013-02-01

    Two-dimensional (2D) slabs and monoperiodic (1D) nanowires orthogonal to the slab surface of the rutile-based TiO2 structure, terminated by densely-packed surfaces and facets, respectively, have been simulated in the current study. The procedure of structural generation of nanowires (NWs) from titania slabs (2D → 1D) is described. We have simulated: (i) (110), (100), (101) and (001) slabs of different thicknesses as well as (ii) [001]- and [110]-oriented nanowires of different diameters terminated by either four types of related {110} facets or alternating {1-10} and {001} facets, respectively. Nanowires have been described using both the Ti atom-centered rotation axes and the hollow-site-centered axes passing through the interstitial sites between the Ti and O atoms closest to the axes. For simulations on TiO2 slabs and NWs, we have performed large-scale ab initio Density Functional Theory (DFT) and hybrid DFT-Hartree-Fock (DFT-HF) calculations with total geometry optimization within the Generalized Gradient Approximation (GGA) in the form of the Perdew-Burke-Ernzerhof exchange-correlation functionals (PBE and PBE0, respectively), using the formalism of linear combination of localized atomic functions (LCAO) implemented in the CRYSTAL09 code. Both structural and electronic properties of the enumerated rutile-based titania slabs and nanowires have been calculated. According to the results of our surface energy calculations, the most stable rutile-based titania slab is terminated by (110) surfaces, whereas the energetically favorable [001]-oriented NWs are also terminated by {110} facets only, thus confirming the results of previous studies.

  6. Statistical mechanics of error-correcting codes

    NASA Astrophysics Data System (ADS)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
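
    A minimal sketch of the encoder described here follows, with message bits as +/-1 spins and each codeword bit the product of K randomly selected bits; the decoding step, which the statistical-mechanics mapping treats as finding the ground state of a K-spin model, is omitted.

        import random

        def sourlas_encode(s, K, M, seed=0):
            """s: message as a list of +/-1 spins. Returns M couplings, each the
            product of K randomly selected message bits (code rate ~ len(s)/M)."""
            rng = random.Random(seed)
            subsets = [rng.sample(range(len(s)), K) for _ in range(M)]
            couplings = []
            for subset in subsets:
                b = 1
                for i in subset:
                    b *= s[i]
                couplings.append(b)
            return subsets, couplings

        # Decoding seeks spins that best satisfy the noisy couplings, i.e. the
        # ground state of a K-spin Ising Hamiltonian (e.g. via annealing).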

  7. [Quality of coding in acute inpatient care].

    PubMed

    Stausberg, J

    2007-08-01

    Routine data in the electronic patient record are frequently used for secondary purposes. Core elements of the electronic patient record are diagnoses and procedures, coded with the mandatory classifications. Despite the important role of routine data for reimbursement, quality management and health care statistics, there is currently no systematic analysis of coding quality in Germany. Existing concepts and investigations share the difficulty of deciding what is right and what is wrong at the end of the long process of medical decision making; a relevant amount of disagreement therefore has to be accepted. For the principal diagnosis, this may be the case in half of the patients. The plausibility of coding looks much better. Once hospitals have passed through a period of optimization, regular and complete coding can be expected. Whether coding matches reality, a prerequisite for further use of the data in medicine and health politics, should be investigated in controlled trials in the future. PMID:17676418

  8. Optimization of Heat Exchangers

    SciTech Connect

    Ivan Catton

    2010-10-01

    The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model the characteristics (pumping power, temperatures, and cost) of heat exchangers more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.
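
    A toy version of such an outer optimization loop is sketched below; the two design variables, the surrogate standing in for the VAT model, and the fitness weighting are all invented for illustration.

        import random

        def vat_surrogate(fin_pitch, channel_h):
            """Stand-in for the VAT solver: returns (pumping_power, heat_rate).
            The real code solves volume-averaged transport equations."""
            q = 1e3 * channel_h ** 0.5 / fin_pitch     # toy heat-transfer rate
            p = 5.0 / (fin_pitch ** 3 * channel_h)     # toy pumping power
            return p, q

        def fitness(ind):
            p, q = vat_surrogate(*ind)
            return q - 0.1 * p     # trade heat transfer against pumping power

        def optimize(pop_size=30, gens=50, lo=1.0, hi=10.0):
            pop = [(random.uniform(lo, hi), random.uniform(lo, hi))
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]          # elitist selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)   # blend crossover + mutation
                    children.append(tuple(
                        min(hi, max(lo, 0.5 * (u + v) * random.uniform(0.9, 1.1)))
                        for u, v in zip(a, b)))
                pop = parents + children
            return max(pop, key=fitness)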

  9. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  10. Dust charging in the dense Enceladus torus

    NASA Astrophysics Data System (ADS)

    Yaroshenko, Victoria; Lühr, Hermann; Morfill, Gregor

    2013-04-01

    The key parameter of dust-plasma interactions is the charge carried by a dust particle. The grain electrostatic potential is usually calculated from the so-called orbit-motion-limited (OML) model [1]. It is valid for a single particle immersed in a collisionless plasma with Maxwellian electron and ion distributions. Such a parameter regime clearly cannot be applied directly to the conditions relevant for the dense Enceladus neutral torus and plume, where the plasma is multispecies and multistreaming and the dust density is high, sometimes even exceeding the plasma number density. We have examined several new factors which can significantly affect grain charging in the dust-loaded plasma of the Enceladus torus and in the plume region and which, to our knowledge, have not been investigated for such plasma environments up to now. These include: (a) the influence of the multispecies plasma composition, namely the presence of two electron populations with electron temperatures ranging from a few eV up to a hundred eV [2], a few ion species (e.g. corotating water-group ions and protons, characterized by different kinetic temperatures), as well as cold non-thermalized new-born water-group ions which move with Kepler velocity [3]; (b) the effect of ion-neutral collisions on dust charging in the dense Enceladus torus and in the plume; (c) the effect of high dust density, when a grain can no longer be considered an isolated particle (especially relevant for the plume region, where the average negative dust charge density according to Cassini measurements is of the order of or even exceeds the plasma number density [4,5]). It turns out that in this case the electrostatic potential and the respective dust charge cannot be deduced from the initial OML formalism, and the effect of dust density must be incorporated into the plasma fluxes flowing to the grain surface to calculate the grain equilibrium charge; (e) since the dust in the planetary rings comes in a wide

  11. SUPPORTED DENSE CERAMIC MEMBRANES FOR OXYGEN SEPARATION

    SciTech Connect

    Timothy L. Ward

    2003-03-01

    This project addresses the need for reliable fabrication methods of supported thin/thick dense ceramic membranes for oxygen separation. Some ceramic materials that possess mixed conductivity (electronic and ionic) at high temperature have the potential to permeate oxygen with perfect selectivity, making them very attractive for oxygen separation and membrane reactor applications. In order to maximize permeation rates at the lowest possible temperatures, it is desirable to minimize diffusional limitations within the ceramic by reducing the thickness of the ceramic membrane, preferably to thicknesses of 10 µm or thinner. It has proven to be very challenging to reliably fabricate dense, defect-free ceramic membrane layers of such thickness. In this project we are investigating the use of ultrafine SrCo0.5FeOx (SCFO) powders produced by aerosol pyrolysis to fabricate such supported membranes. SrCo0.5FeOx is a ceramic composition that has been shown to have desirable oxygen permeability, as well as good chemical stability in the reducing environments that are encountered in some important applications. Our approach is to use a doctor blade procedure to deposit pastes prepared from the aerosol-derived SCFO powders onto porous SCFO supports. We have previously shown that membrane layers deposited from the aerosol powders can be sintered to high density without densification of the underlying support. However, these membrane layers contained large-scale cracks and open areas, making them unacceptable for membrane purposes. In the past year, we have refined the paste formulations based on guidance from the ceramic tape casting literature. We have identified a multicomponent organic formulation utilizing castor oil as dispersant in a solvent of mineral spirits and isopropanol. Other additives were polyvinylbutyral as binder and dibutylphthalate as plasticizer. The nonaqueous formulation has superior wetting properties with the powder, and

  12. Experimentally validated 3-D simulation of shock waves generated by dense explosives in confined complex geometries.

    PubMed

    Rigas, Fotis; Sklavounos, Spyros

    2005-05-20

    Accidental blast wave generation and propagation in the surroundings poses severe threats to people and property. The prediction of overpressure maxima and their change with time at specified distances can lead to useful conclusions in quantitative risk analysis applications. In this paper, the use of the computational fluid dynamics (CFD) code CFX-5.6 on dense explosive detonation events is described. The work deals with the three-dimensional simulation of overpressure wave propagation generated by the detonation of a dense explosive within a small-scale branched tunnel. It also aims at validating the code against published experimental data, as well as studying the way the resulting shock wave propagates in a confined space configuration. Predicted overpressure histories were plotted and compared against experimental measurements, showing reasonably good agreement. Overpressure maxima and corresponding times were found close to the measured ones, confirming that CFD may constitute a useful tool in explosion hazard assessment procedures. Moreover, it was found that the blast wave propagates at supersonic speed along the tunnel, accompanied by high overpressure levels, indicating that space confinement favors the formation and maintenance of a shock rather than a weak pressure wave. PMID:15885402

  13. Dense Molecular Cores Being Externally Heated

    NASA Astrophysics Data System (ADS)

    Kim, Gwanjeong; Lee, Chang Won; Gopinathan, Maheswar; Jeong, Woong-Seob; Kim, Mi-Ryang

    2016-06-01

    We present results of our study of eight dense cores, previously classified as starless, using infrared (3–160 μm) imaging observations with the AKARI telescope and molecular line (HCN and N2H+) mapping observations with the KVN telescope. Combining our results with the archival IR to millimeter continuum data, we examined the starless nature of these eight cores. Two of the eight cores are found to harbor faint protostars having luminosities of ∼0.3–4.4 L⊙. The other six cores are found to remain starless and probably are in a dynamically transitional state. The temperature maps produced using multi-wavelength images show an enhancement of about 3–6 K toward the outer boundary of these cores, suggesting that they are most likely being heated externally by nearby stars and/or interstellar radiation fields. Large virial parameters and an overdominance of red asymmetric line profiles over the cores may indicate that the cores are set into either an expansion or an oscillatory motion, probably due to the external heating. Most of the starless cores show a coreshine effect due to the scattering of light by micron-sized dust grains. This may imply that the age of the cores is of the order of ∼10^5 years, which is consistent with the timescale required for the cores to evolve into an oscillatory stage due to external perturbation. Our observational results support the idea that external feedback from nearby stars and/or interstellar radiation fields may play an important role in the dynamical evolution of the cores.

  14. The chemistry of dense interstellar clouds

    NASA Technical Reports Server (NTRS)

    Irvine, W. M.

    1991-01-01

    The basic theme of this program is the study of molecular complexity and evolution in interstellar and circumstellar clouds incorporating the biogenic elements. Recent results include the identification of a new astronomical carbon-chain molecule, C4Si. This species was detected in the envelope expelled from the evolved star IRC+10216 in observations at the Nobeyama Radio Observatory in Japan. C4Si is the carrier of six previously unidentified lines. This detection reveals the existence of a new series of carbon-chain molecules, CnSi (n = 1, 2, 4). Such molecules may well be formed from the reaction of Si(+) with acetylene and acetylene derivatives. Other recent research has concentrated on the chemical composition of the cold, dark interstellar clouds, the nearest dense molecular clouds to the solar system. Such regions have very low kinetic temperatures, on the order of 10 K, and are known to be formation sites for solar-type stars. We have recently identified for the first time in such regions the species H2S, NO, and HCOOH (formic acid). The H2S abundance appears to exceed that predicted by gas-phase models of ion-molecule chemistry, perhaps suggesting the importance of synthesis on grain surfaces. Additional observations in dark clouds have studied the ratio of ortho- to para-thioformaldehyde. Since this ratio is expected to be unaffected by both radiative and ordinary collisional processes in the cloud, it may well reflect the formation conditions of this molecule. The ratio is observed to depart from that expected under conditions of chemical equilibrium at formation, perhaps reflecting efficient interchange of molecules between the gas phase and cold dust grains.

  15. Model For Dense Molecular Cloud Cores

    NASA Technical Reports Server (NTRS)

    Doty, Steven D.; Neufeld, David A.

    1997-01-01

    We present a detailed theoretical model for the thermal balance, chemistry, and radiative transfer within quiescent dense molecular cloud cores that contain a central protostar. In the interior of such cores, we expect the dust and gas temperatures to be well coupled, while in the outer regions CO rotational emissions dominate the gas cooling and the predicted gas temperature lies significantly below the dust temperature. Large spatial variations in the gas temperature are expected to affect the gas-phase chemistry dramatically; in particular, the predicted water abundance varies by more than a factor of 1000 within cloud cores that contain luminous protostars. Based upon our predictions for the thermal and chemical structure of cloud cores, we have constructed self-consistent radiative transfer models to compute the line strengths and line profiles for transitions of ^12CO, ^13CO, C^18O, ortho- and para-H2^16O, ortho- and para-H2^18O, and O I. We carried out a general parameter study to determine the dependence of the model predictions upon the parameters assumed for the source. We expect many of the far-infrared and submillimeter rotational transitions of water to be detectable either in emission or absorption with the use of the Infrared Space Observatory (ISO) and the Submillimeter Wave Astronomy Satellite. Quiescent, radiatively heated hot cores are expected to show low-gain maser emission in the 183 GHz 3_13-2_20 water line, such as has been observed toward several hot core regions using ground-based telescopes. We predict the ^3P_1-^3P_2 fine-structure transition of atomic oxygen near 63 microns to be in strong absorption against the continuum for many sources. Our model can also account successfully for recent ISO observations of absorption in rovibrational transitions of water toward the source AFGL 2591.

  16. Coded continuous wave meteor radar

    NASA Astrophysics Data System (ADS)

    Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter

    2016-03-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
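
    The pulse-compression step can be illustrated with a short sketch; the code length, echo delay, and signal level here are arbitrary, and Doppler is ignored.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 10_000
        code = rng.choice([-1.0, 1.0], size=N)   # pseudorandom BPSK phase code

        # Received signal: one echo delayed by 357 samples, ~26 dB below noise
        rx = rng.normal(size=N + 512)
        rx[357:357 + N] += 0.05 * code

        # Pulse compression: correlate with the known code; the processing gain
        # of ~10*log10(N) = 40 dB lifts the echo well above the noise floor.
        corr = np.correlate(rx, code, mode="valid")
        print(np.argmax(np.abs(corr)))           # -> 357, the echo's range gate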

  17. Aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Chapman, G. T.

    1983-01-01

    The procedure of using numerical optimization methods coupled with computational fluid dynamics (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.

  18. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN has its solution sequence 200 for design optimization, and MATLAB has an Optimization Toolbox. Other packages, such as the ZAERO aeroelastic panel code and the CFL3D Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.

  19. Quantum error-correcting codes over mixed alphabets

    NASA Astrophysics Data System (ADS)

    Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C. H.

    2013-08-01

    We study quantum error-correcting codes over mixed alphabets to deal with a more complicated and practical situation in which the physical systems used for encoding may have different numbers of energy levels. In particular, we investigate their constructions and propose a theory of the quantum Singleton bound. Two kinds of code constructions are presented: a projection-based construction for the general case and a graphical construction, based on a graph-theoretical object called the composite coding clique, for the case of reducible alphabets. We find some optimal one-error-correcting or -detecting codes over two alphabets. Our method of composite coding cliques also sheds light on constructing standard quantum error-correcting codes, and other families of optimal codes are found.

  20. Maximally dense packings of two-dimensional convex and concave noncircular particles

    NASA Astrophysics Data System (ADS)

    Atkinson, Steven; Jiao, Yang; Torquato, Salvatore

    2012-09-01

    Dense packings of hard particles have important applications in many fields, including condensed matter physics, discrete geometry, and cell biology. In this paper, we employ a stochastic search implementation of the Torquato-Jiao adaptive-shrinking-cell (ASC) optimization scheme [Nature (London) 460, 876 (2009)] to find maximally dense particle packings in d-dimensional Euclidean space R^d. While the original implementation was designed to study spheres and convex polyhedra in d≥3, our implementation focuses on d=2 and extends the algorithm to include both concave polygons and certain complex convex or concave nonpolygonal particle shapes. We verify the robustness of this packing protocol by successfully reproducing the known putative optimal packings of congruent copies of regular pentagons and octagons, then employ it to suggest dense packing arrangements of congruent copies of certain families of concave crosses, convex and concave curved triangles (incorporating shapes resembling the Mercedes-Benz logo), and "moonlike" shapes. Analytical constructions are determined subsequently to obtain the densest known packings of these particle shapes. For the examples considered, we find that the densest packings of both convex and concave particles with central symmetry are achieved by their corresponding optimal Bravais lattice packings; for particles lacking central symmetry, the densest packings obtained are nonlattice periodic packings, which are consistent with recently proposed general organizing principles for hard particles. Moreover, we find that the densest known packings of certain curved triangles are periodic with a four-particle basis, and we find that the densest known periodic packings of certain moonlike shapes possess no inherent symmetries. Our work adds to the growing evidence that particle shape can be used as a tuning parameter to achieve a diversity of packing structures.
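
    For contrast with such optimized packings, a naive stochastic baseline is easy to write down: random sequential addition of equal disks jams near a packing fraction of only about 0.55, far below the triangular-lattice optimum pi/sqrt(12) ~ 0.9069 that ASC-type optimization recovers for disks. The sketch below is that baseline only; the ASC scheme additionally deforms the simulation cell and handles noncircular shapes.

        import numpy as np

        def rsa_disks(radius=0.02, attempts=200_000, seed=0):
            """Random sequential addition of equal disks in a unit periodic box."""
            rng = np.random.default_rng(seed)
            centers = []
            for _ in range(attempts):
                p = rng.random(2)
                if centers:
                    d = np.asarray(centers) - p
                    d -= np.rint(d)                   # minimum-image convention
                    if (d ** 2).sum(axis=1).min() < (2 * radius) ** 2:
                        continue
                centers.append(p)
            # packing fraction; saturates near 0.547 for many attempts
            return np.asarray(centers), len(centers) * np.pi * radius ** 2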

  1. Maximally dense packings of two-dimensional convex and concave noncircular particles.

    PubMed

    Atkinson, Steven; Jiao, Yang; Torquato, Salvatore

    2012-09-01

    Dense packings of hard particles have important applications in many fields, including condensed matter physics, discrete geometry, and cell biology. In this paper, we employ a stochastic search implementation of the Torquato-Jiao adaptive-shrinking-cell (ASC) optimization scheme [Nature (London) 460, 876 (2009)] to find maximally dense particle packings in d-dimensional Euclidean space R(d). While the original implementation was designed to study spheres and convex polyhedra in d≥3, our implementation focuses on d=2 and extends the algorithm to include both concave polygons and certain complex convex or concave nonpolygonal particle shapes. We verify the robustness of this packing protocol by successfully reproducing the known putative optimal packings of congruent copies of regular pentagons and octagons, then employ it to suggest dense packing arrangements of congruent copies of certain families of concave crosses, convex and concave curved triangles (incorporating shapes resembling the Mercedes-Benz logo), and "moonlike" shapes. Analytical constructions are determined subsequently to obtain the densest known packings of these particle shapes. For the examples considered, we find that the densest packings of both convex and concave particles with central symmetry are achieved by their corresponding optimal Bravais lattice packings; for particles lacking central symmetry, the densest packings obtained are nonlattice periodic packings, which are consistent with recently-proposed general organizing principles for hard particles. Moreover, we find that the densest known packings of certain curved triangles are periodic with a four-particle basis, and we find that the densest known periodic packings of certain moonlike shapes possess no inherent symmetries. Our work adds to the growing evidence that particle shape can be used as a tuning parameter to achieve a diversity of packing structures. PMID:23030907

  2. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    SciTech Connect

    Nasrabadi, M. N. Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied to simulate the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; the parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  3. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    NASA Astrophysics Data System (ADS)

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-01

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied to simulate the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; the parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  4. Cellulases and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-01-01

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  5. Cellulases and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-02-20

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  6. Multiple Turbo Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    A description is given of multiple turbo codes and a suitable decoder structure derived from an approximation to the maximum a posteriori probability (MAP) decision rule, which is substantially different from the decoder for two-code-based encoders.

  7. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
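
    Creating such codes takes only a few lines with, for example, the third-party Python package qrcode; the package choice and the encoded strings are illustrative, not from the article.

        # pip install qrcode[pil]
        import qrcode

        # One line: encode data and save the resulting image
        qrcode.make("https://example.edu/syllabus").save("syllabus_qr.png")

        # Level-H error correction keeps a code readable with up to ~30% of it
        # obscured (the error-correction capacity mentioned above).
        qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H)
        qr.add_data("Scavenger hunt clue #3: look under the globe")
        qr.make_image().save("clue3_qr.png")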

  8. Stabilized Acoustic Levitation of Dense Materials Using a High-Powered Siren

    NASA Technical Reports Server (NTRS)

    Gammell, P. M.; Croonquist, A.; Wang, T. G.

    1982-01-01

    Stabilized acoustic levitation and manipulation of dense (e.g., steel) objects of 1 cm diameter, using a high powered siren, was demonstrated in trials that investigated the harmonic content and spatial distribution of the acoustic field, as well as the effect of sample position and reflector geometries on the acoustic field. Although further optimization is possible, the most stable operation achieved is expected to be adequate for most containerless processing applications. Best stability was obtained with an open reflector system, using a flat lower reflector and a slightly concave upper one. Operation slightly below resonance enhances stability as this minimizes the second harmonic, which is suspected of being a particularly destabilizing influence.

  9. Energy-delay performance of giant spin Hall effect switching for dense magnetic memory

    NASA Astrophysics Data System (ADS)

    Manipatruni, Sasikanth; Nikonov, Dmitri E.; Young, Ian A.

    2014-10-01

    We show that giant spin Hall effect (GSHE) magnetoresistive random access memory (MRAM) can enable better energy-delay and voltage performance than magnetic tunnel junction (MTJ) spin-torque devices at 10-30 nm nanomagnet dimensions. We propose a dense bit cell composed of a folded electrode to enable scaling to sub-10 nm CMOS. We derive the energy-delay trajectory and energy-delay product of GSHE and MTJ devices, with an energy minimum at the magnetic characteristic time. Optimized GSHE devices with perpendicular magnetic anisotropy (PMA) can enable low voltage (<0.1 V), scaled dimensions, and fast switching times (100 ps) at an average switching energy approaching 100 aJ/bit.

  10. STEEP32 computer code

    NASA Technical Reports Server (NTRS)

    Goerke, W. S.

    1972-01-01

    A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.

  11. Numerical Investigation of Entrainment of Turbulent Dense Currents

    NASA Astrophysics Data System (ADS)

    Bhaganaagar, Kiran; Nayamatulla, Manjure

    2016-04-01

    Entrainment in dense overflows is of fundamental importance for understanding the transport of the densest water in the ocean. Estimating entrainment is extremely challenging, and to date we do not have a fundamental framework that parameterizes it. Highly accurate direct numerical simulation and large eddy simulation solvers have been developed to simulate dense currents over a range of smooth and rough surfaces. Simulations have been performed both for lock-exchange currents and for constant-flux currents. A mathematical framework has been developed to estimate the entrainment of 2-D and 3-D dense currents. Entrainment is calculated from first principles as the relative change in the volume of the dense current in time with respect to the buoyancy forcing that drives the current. A combination of the threshold method, wherein the height of the current is evaluated as the height corresponding to a specified threshold value, and the sorting method, wherein the mixed fluid is sorted into bins ranging from dense fluid at the bottom to ambient fluid at the top, is used to evaluate the interface between the dense and ambient fluid. Entrainment is sensitive to the method used to evaluate the interface height. Finally, we obtained the dependence of the entrainment parameter on non-dimensional parameters. The analysis demonstrates that lock-exchange currents exhibit less mixing and entrainment than constant-flux currents at the same Reynolds and Froude numbers. The differences arise from differences in the nature of the Kelvin-Helmholtz instabilities and lobe-and-cleft structures. Rough bottom surfaces introduce additional dynamics to the dense currents. The spacing between the roughness elements is shown to be an important metric in the entrainment parameters of lock-exchange currents. Densely spaced (D-type) currents travel slower, as roughness hinders density current propagation through enhanced drag and produces additional eddies and instabilities compared to sparsely
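
    A minimal sketch of the threshold method and of a common bulk entrainment estimate follows; the threshold value and the specific bulk formula are illustrative assumptions, not the paper's exact framework.

        import numpy as np

        def interface_height(z, rho, rho_ambient, rho_dense, threshold=0.02):
            """Threshold method: interface = highest z where the normalized
            density excess still exceeds `threshold`."""
            c = (rho - rho_ambient) / (rho_dense - rho_ambient)
            above = np.where(c >= threshold)[0]
            return z[above[-1]] if above.size else 0.0

        def bulk_entrainment(h, t, U):
            """E ~ (1/U) dh/dt: thickness growth per unit advected distance,
            one common bulk definition for a current with head speed U."""
            return np.gradient(h, t) / U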

  12. Measuring Wind Ventilation of Dense Surface Snow

    NASA Astrophysics Data System (ADS)

    Drake, S. A.; Huwald, H.; Selker, J. S.; Higgins, C. W.; Lehning, M.; Thomas, C. K.

    2014-12-01

    Wind ventilation enhances exposure of suspended, canopy-captured and corniced snow to subsaturated air and can significantly increase the sublimation rate. Although the sublimation rate may be high for highly ventilated snow, this regime represents a small fraction of the snow residing in a basin, potentially minimizing its influence on the snow mass balance. In contrast, the vast majority of a seasonal snowpack typically resides as poorly ventilated surface snow. The sublimation rate of surface snow is often locally so small as to defy direct measurement, but regionally pervasive enough that the integrated mass loss of frozen water across a basin may be significant on a seasonal basis. In a warming climate, the sublimation rate increases even in subfreezing conditions because the equilibrium water vapor pressure over ice increases exponentially with temperature. To better understand the process of wintertime surface snow sublimation, we need to quantify the depth to which turbulent and topographically driven pressure perturbations effect air exchange within the snowpack. Hypothetically, this active-layer depth increases the effective ventilated snow surface area, enhancing sublimation above that given by a plane, impermeable snow surface. We designed and performed a novel set of field experiments at two sites in the Oregon Cascades during the 2014 winter season to examine the spectral attenuation of pressure perturbations with depth for dense snow as a function of turbulence intensity and snow permeability. We mounted a Campbell Scientific Irgason Integrated CO2 and H2O Open Path Gas Analyzer and 3-D Sonic Anemometer one meter above the snow to capture mean and turbulent wind forcing, and placed the outlets of four high-precision ParoScientific 216B-102 pressure transducers at different depths to measure the depth-dependent pressure response to wind forcing. A GPS antenna captured data acquisition time with sufficient precision to synchronize a Campbell Scientific CR-3000 acquiring

  13. SUPPORTED DENSE CERAMIC MEMBRANES FOR OXYGEN SEPARATION

    SciTech Connect

    Timothy L. Ward

    2000-06-30

    This successfully reduced cracking; however, the films retained open porosity. The investigation of this concept will be continued in the final year of the project. Investigation of a metal organic chemical vapor deposition (MOCVD) method for defect mending in dense membranes was also initiated. An appropriate metal organic precursor (iron tetramethylheptanedionate) was identified, whose deposition can be controlled by access to oxygen at temperatures in the 280-300 °C range. Initial experiments have deposited iron oxide, but only on the membrane surface; thus refinement of this method will continue.

  14. Wheat Landraces Are Better Qualified as Potential Gene Pools at Ultraspaced rather than Densely Grown Conditions

    PubMed Central

    Ninou, Elissavet G.; Mylonas, Ioannis G.; Tokatlidis, Ioannis S.

    2014-01-01

    The negative relationship between the yield potential of a genotype and its competitive ability may constitute an obstacle to recognizing outstanding genotypes within heterogeneous populations. This issue was investigated by growing six heterogeneous wheat landraces along with a pure-line commercial cultivar under both dense and widely spaced conditions. The performance of two landraces showed a perfect match to the above relationship. Although they lagged behind the cultivar by 64 and 38% at the dense stand, the reverse was true with spaced plants, where they succeeded in out-yielding the cultivar by 58 and 73%, respectively. It was concluded that a dense stand might undervalue a landrace as a potential gene pool for single-plant selection targeting pure-line cultivars, because plants representing high-yielding genotypes are unable to exhibit their capacity under a competitive disadvantage. On the other hand, the yield expression of individuals is optimized when density is low enough to preclude interplant competition. Therefore, the latter condition appears ideal for identifying the most promising landrace for breeding and subsequently recognizing the individuals representing the most outstanding genotypes. PMID:24955427

  15. An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks.

    PubMed

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun

    2015-01-01

    Super dense and distributed wireless sensor networks have become very popular with the development of small cell technology, Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicular-to-Vehicular (V2V) communications and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions to improve the accuracy of sensing and spectral efficiency, a new channel access scheme needs to be designed to solve the channel congestion problem introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyzed the channel contention problem using a novel normalized channel contention analysis model which provides information on how to tune the contention window according to the state of channel contention. We then proposed an adaptive channel contention window tuning algorithm in which the contention window tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our proposed adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and 0.97 of fairness index especially in dynamic and dense networks. PMID:26633421
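
    The flavor of such contention-window adaptation can be sketched as follows; the target rate, gain, and bounds are illustrative, and the paper's tuning rate is instead driven by its estimated contention level.

        def tune_cw(cw, collision_rate, target=0.1,
                    cw_min=16, cw_max=1024, gain=2.0):
            """Grow the contention window quickly when measured collisions
            exceed the target; shrink it gently when the channel is calm."""
            if collision_rate > target:
                return min(cw_max, int(cw * gain))   # fast ramp-up when congested
            return max(cw_min, cw - 1)               # slow additive decrease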

  16. Fast magnetic resonance imaging of the internal impact response of dense granular suspensions

    NASA Astrophysics Data System (ADS)

    Müller, Christoph; Penn, Alexander; Pruessmann, Klaas P.

    Dense granular suspensions exhibit a number of intriguing properties, such as discontinuous shear thickening and the formation of dynamic jamming fronts when impacted by a solid. Probing these phenomena non-intrusively in full three-dimensional systems is, however, highly challenging, as suspensions are commonly opaque and therefore not optically accessible. Here we report the development and implementation of a fast magnetic resonance imaging (MRI) methodology that allows us to image the internal dynamics of dense granular suspensions at high temporal resolution. An important facet of this work is the implementation of parallel MRI using tailored multi-channel receive hardware, together with the optimization of the magnetic properties (susceptibility and NMR relaxivity) of the liquid phase. These two improvements enable us to use fast single-shot pulse sequences while retaining sufficient signal intensity at temporal resolutions below 50 ms. Furthermore, using motion-sensitive MR pulse sequences, we are able to image bulk motion within the system and the response of dense granular suspensions to fast impacts.

  17. An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks

    PubMed Central

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun

    2015-01-01

    Super dense and distributed wireless sensor networks have become very popular with the development of small cell technology, the Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicle-to-Vehicle (V2V) communications and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions for improving sensing accuracy and spectral efficiency, a new channel access scheme is needed to solve the channel congestion problem introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyze the channel contention problem using a novel normalized channel contention analysis model, which provides information on how to tune the contention window according to the state of channel contention. We then propose an adaptive channel contention window tuning algorithm in which the tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our proposed adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and a fairness index of 0.97, especially in dynamic and dense networks. PMID:26633421

  18. Color code identification in coded structured light.

    PubMed

    Zhang, Xu; Li, Youfu; Zhu, Limin

    2012-08-01

    Color codes are widely employed in coded structured light to reconstruct the three-dimensional shape of objects. Before the correspondence can be determined, an essential step is to identify the color code. Until now, the lack of an effective evaluation standard has hindered progress in this unsupervised classification task. In this paper, we propose a benchmark-based framework to explore this frontier. Two basic facets of color code identification are discussed: color feature selection and clustering algorithm design. First, we evaluate the performance of different color features and, after a large number of experiments, rank them by discriminating power. Second, to overcome the initialization sensitivity of K-means, a decision-directed method is introduced to find the initial centroids. Quantitative comparisons confirm that our method is robust, achieves high accuracy, and finds or closely approaches the global peak. PMID:22859022
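
    As a rough illustration of the clustering step, here is a minimal Python sketch of K-means seeded by a decision-directed style rule, in which each new centroid is the sample farthest from those already chosen; the paper's exact seeding rule may differ, and all names here are illustrative:

    import numpy as np

    def decision_directed_init(features, k, rng):
        # Seed centroids one at a time, each as the sample farthest
        # from all centroids chosen so far (a simple decision-directed
        # alternative to K-means' random initialization).
        centroids = [features[rng.integers(len(features))]]
        for _ in range(k - 1):
            dists = np.min(
                [np.linalg.norm(features - c, axis=1) for c in centroids],
                axis=0)
            centroids.append(features[np.argmax(dists)])
        return np.array(centroids)

    def kmeans(features, k, iters=20, seed=0):
        # Standard K-means on color-feature vectors (e.g., RGB or hue).
        rng = np.random.default_rng(seed)
        centroids = decision_directed_init(features, k, rng)
        for _ in range(iters):
            labels = np.argmin(
                np.linalg.norm(features[:, None] - centroids[None], axis=2),
                axis=1)
            for j in range(k):  # keep the old centroid if a cluster empties
                members = features[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return labels, centroids

    # Example: cluster synthetic color samples into k code classes.
    samples = np.random.default_rng(1).random((500, 3))
    labels, centers = kmeans(samples, k=4)

    Spreading the initial centroids apart in feature space makes the subsequent iterations far less likely to settle in a poor local optimum, which is consistent with the abstract's claim of finding or closely approaching the global peak.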

  19. Unified inference for sparse and dense longitudinal models.

    PubMed

    Kim, Seonjin; Zhao, Zhibiao

    2013-03-01

    In longitudinal data analysis, statistical inference for sparse data and for dense data can be substantially different. For the kernel smoothing estimate of the mean function, the convergence rates and limiting variance functions differ between the two scenarios. This phenomenon poses challenges for statistical inference, as a subjective choice between the sparse and dense cases may lead to wrong conclusions. We develop self-normalization-based methods that adapt to the sparse and dense cases in a unified framework. Simulations show that the proposed methods outperform some existing methods. PMID:24966413
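
    For concreteness, the kernel smoothing estimate of the mean function that the abstract refers to can be sketched in a few lines of Python. This is a minimal Nadaraya-Watson pooled estimator only, not the paper's self-normalized inference procedure, and the bandwidth h is an illustrative choice:

    import numpy as np

    def kernel_mean(t_obs, y_obs, t_grid, h):
        # Nadaraya-Watson estimate of the mean function mu(t), pooling
        # (time, response) pairs across all subjects; sparse and dense
        # designs differ only in how many pairs each subject contributes.
        u = (t_grid[:, None] - t_obs[None, :]) / h
        w = np.exp(-0.5 * u**2)            # Gaussian kernel weights
        return (w @ y_obs) / w.sum(axis=1)

    # Example: noisy observations of mu(t) = sin(2*pi*t) at random times.
    rng = np.random.default_rng(0)
    t_obs = rng.random(300)
    y_obs = np.sin(2 * np.pi * t_obs) + 0.3 * rng.standard_normal(300)
    t_grid = np.linspace(0.05, 0.95, 10)
    print(kernel_mean(t_obs, y_obs, t_grid, h=0.1))

    The self-normalization idea in the paper sidesteps the need to know which limiting variance function applies: the estimate is studentized by a data-driven normalizer, so the same procedure remains valid under both the sparse and dense regimes.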

  20. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed-upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.