Science.gov

Sample records for optimal dense coding

  1. Optimized QKD BB84 protocol using quantum dense coding and CNOT gates: feasibility based on probabilistic optical devices

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2014-05-01

    In this work, we simulate a fiber-based Quantum Key Distribution Protocol (QKDP) BB84 working at the telecom wavelength of 1550 nm, taking into consideration an optimized attack strategy. We consider a quantum channel composed of a probabilistic Single Photon Source (SPS), single-mode optical fiber, and a high-efficiency quantum detector. We show the advantages of using Quantum Dots (QD) embedded in a micro-cavity compared to Heralded Single Photon Sources (HSPS). Second, we show that Eve always gains some information, depending on the mean photon number per pulse of the SPS used; therefore, we propose an optimized version of the QKDP BB84 based on Quantum Dense Coding (QDC) that could be implemented with quantum CNOT gates. We evaluate the success probability of implementing the optimized QKDP BB84 using today's probabilistic quantum optical devices for circuit realization. For our modeling, we use an abstract probabilistic model of a CNOT gate based on linear optical components and having a success probability of √(4/27), and we take into consideration the best SPS realizations, namely the QD and the HSPS, generating a single photon per pulse with success probabilities of 0.73 and 0.37, respectively. We show that the protocol is totally secure against attacks but could be correctly implemented only with a success probability of a few percent.
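
    The quoted figures already let one reproduce the "few percent" estimate, since independent probabilistic components multiply. A minimal sketch, assuming an illustrative circuit that consumes two single-photon emissions and two linear-optical CNOT gates (these counts are my assumption, not taken from the paper):

```python
import math

P_CNOT = math.sqrt(4 / 27)  # linear-optical CNOT success probability (from the abstract)
P_QD = 0.73                 # quantum-dot SPS success probability per pulse
P_HSPS = 0.37               # heralded SPS success probability per pulse

def circuit_success(p_sps: float, n_photons: int = 2, n_cnot: int = 2) -> float:
    """Independent components must all succeed, so probabilities multiply."""
    return p_sps ** n_photons * P_CNOT ** n_cnot

for name, p_sps in [("QD", P_QD), ("HSPS", P_HSPS)]:
    print(f"{name}: {circuit_success(p_sps):.1%}")
# QD gives roughly 7.9% and HSPS roughly 2.0%: "a few percent", as the abstract says.
```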

  2. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-01-14

    During the past quarter, float-sink analyses were completed for four of the seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid-February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operator's manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.

  3. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-04-11

    The test data obtained from the Baseline Assessment, which compares the performance of the density tracers to that of different sizes of coal particles, is now complete. The experimental results show that the tracer data can indeed be used to accurately predict HMC performance. The following conclusions were drawn: (i) the tracer curve is slightly sharper than the curve for the coarsest size fraction of coal (probably due to the greater resolution of the tracer technique), (ii) the Ep increases with decreasing coal particle size, and (iii) the Ep values are not excessively large for well-maintained HMC circuits. The major problems discovered were associated with improper apex-to-vortex finder ratios and particle hang-up due to media segregation. Only one plant yielded test data that were typical of a fully optimized level of performance.

  4. Deterministic dense coding with partially entangled states

    SciTech Connect

    Mozes, Shay; Reznik, Benni; Oppenheim, Jonathan

    2005-01-01

    The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d²-1. We also find that states with less entanglement can have a greater deterministic communication capacity than other more entangled states.
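
    The "less than one ebit" statement refers to the entanglement entropy of the shared state, computable directly from its Schmidt coefficients. A minimal numpy sketch with an example d = 3 state of my own choosing (not from the paper):

```python
import numpy as np

def ebits(schmidt_coeffs):
    """Entanglement entropy (in ebits) of a pure bipartite state
    with the given Schmidt coefficients."""
    p = np.asarray(schmidt_coeffs, dtype=float) ** 2
    p = p / p.sum()                          # normalize, defensively
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A maximally entangled qutrit pair (d = 3) carries log2(3) ~ 1.585 ebits...
print(ebits(np.sqrt([1/3, 1/3, 1/3])))       # 1.585
# ...while a skewed d = 3 state dips below one ebit, yet per the paper can
# still admit deterministic dense coding with more than d letters.
print(ebits(np.sqrt([0.8, 0.1, 0.1])))       # ~0.92 ebits
```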

  5. Relating quantum discord with the quantum dense coding capacity

    SciTech Connect

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.
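
    As background for the capacity side of this relation, the standard one-way dense coding capacity of a state ρ_AB with sender dimension d_A is C = log₂ d_A + S(ρ_B) − S(ρ_AB), where S is the von Neumann entropy. A small numpy sketch evaluating it for a noisy Bell state (the Werner-state example is mine, not the paper's):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def dense_coding_capacity(rho_ab, d_a, d_b):
    """C = log2(d_a) + S(rho_B) - S(rho_AB); the state beats classical
    transmission when the coherent-information term is positive."""
    rho_b = np.trace(rho_ab.reshape(d_a, d_b, d_a, d_b), axis1=0, axis2=2)
    return np.log2(d_a) + vn_entropy(rho_b) - vn_entropy(rho_ab)

# Werner state: p |Phi+><Phi+| + (1 - p) I/4
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)
for p in (1.0, 0.9, 0.5):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(p, dense_coding_capacity(rho, 2, 2))
# p = 1 gives the ideal 2 bits; decoherence (lower p) erodes the capacity.
```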

  6. Displaced photon states as resource for dense coding

    NASA Astrophysics Data System (ADS)

    Podoshvedov, Sergey A.

    2009-01-01

    We extend the analysis of the dense coding protocol based on displaced photon states. Phase transformations of a displaced qubit allow one to transform five states into each other. Each “particle-carrier” (displaced qubit) carries an equal number of photons. On the receiving side, it is possible to decode all five outcomes using a linear optical scheme with beam splitters and on-off photodetectors that cannot discriminate between different numbers of detected photons. In summary, by interacting with only a single displaced qubit, it is possible to transmit more than two bits of information. Optimal conditions that guarantee the maximal communication rate of log₂5 per “particle-carrier” (displaced qubit), as well as the influence of decoherence on the communication rate, are considered.

  7. Code Optimization Techniques

    SciTech Connect

    MAGEE,GLEN I.

    2000-08-03

    Computers transfer data in a number of different ways; whether through a serial port, a parallel port, over a modem, over an Ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
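
    A classic optimization in Reed-Solomon encoders, shown here purely as an illustrative technique (the AURA paper's exact steps are not reproduced in the abstract), is replacing bitwise Galois-field multiplication with log/antilog table lookups:

```python
# GF(2^8) multiplication two ways: a bitwise "schoolbook" loop versus
# log/antilog table lookups. Table lookup is the standard speed trick in
# Reed-Solomon encoders; the AURA project's actual optimizations may differ.

PRIM = 0x11D  # a common primitive polynomial for GF(256): x^8+x^4+x^3+x^2+1

def gf_mul_slow(a: int, b: int) -> int:
    """Carry-less multiply with modular reduction, one bit at a time."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return r

# Precompute exp/log tables over the multiplicative group of size 255.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = EXP[i + 255] = x   # doubled table avoids a mod-255 at lookup time
    LOG[x] = i
    x = gf_mul_slow(x, 2)

def gf_mul_fast(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

assert all(gf_mul_slow(a, b) == gf_mul_fast(a, b)
           for a in range(256) for b in range(256))
```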

  8. Induction technology optimization code

    SciTech Connect

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-08-21

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport, and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top-level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials, and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. The Induction Technology Optimization Study (ITOS) was undertaken to examine viable combinations of a linear induction accelerator and a relativistic klystron (RK) for high-power microwave production. It is proposed that microwaves from the RK will power a high-gradient accelerator structure for linear collider development. Previous work indicates that the RK will require a nominal 3-MeV, 3-kA electron beam with a 100-ns flat top. The proposed accelerator-RK combination will be a high average power system capable of sustained microwave output at a 300-Hz pulse repetition frequency. The ITOS code models many combinations of injector, accelerator, and pulse power designs that will supply an RK with the beam parameters described above.

  9. Parallel sparse and dense information coding streams in the electrosensory midbrain.

    PubMed

    Sproule, Michael K J; Metzen, Michael G; Chacron, Maurice J

    2015-10-21

    Efficient processing of incoming sensory information is critical for an organism's survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing.
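
    Dense versus sparse responding is often quantified with a sparseness index; the Treves-Rolls measure below is a standard choice, shown as a generic illustration (the abstract does not specify the paper's metric):

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness: near 0 for a dense code in which a neuron
    responds almost uniformly across stimuli, approaching 1 when it fires
    for only a few of them. Defined as (1 - a) / (1 - 1/N), where
    a = (mean r)^2 / mean(r^2) is the activity ratio in [1/N, 1]."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = (r.mean() ** 2) / np.mean(r ** 2)
    return (1 - a) / (1 - 1 / n)

dense_neuron = np.array([9.0, 11.0, 10.0, 12.0, 8.0])   # responds to everything
sparse_neuron = np.array([0.0, 0.0, 30.0, 0.0, 0.5])    # responds to one stimulus
print(treves_rolls_sparseness(dense_neuron))    # close to 0
print(treves_rolls_sparseness(sparse_neuron))   # close to 1
```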

  10. Distributed quantum dense coding with two receivers in noisy environments

    NASA Astrophysics Data System (ADS)

    Das, Tamoghna; Prabhu, R.; SenDe, Aditi; Sen, Ujjwal

    2015-11-01

    We investigate the effect of noisy channels in a classical information transfer through a multipartite state which acts as a substrate for the distributed quantum dense coding protocol between several senders and two receivers. The situation is qualitatively different from the case with one or more senders and a single receiver. We obtain an upper bound on the multipartite capacity which is tightened in the case of the covariant noisy channel. We also establish a relation between the genuine multipartite entanglement of the shared state and the capacity of distributed dense coding using that state, both in the noiseless and the noisy scenarios. Specifically, we find that, in the case of multiple senders and two receivers, the corresponding generalized Greenberger-Horne-Zeilinger states possess higher dense coding capacities as compared to a significant fraction of pure states having the same multipartite entanglement.

  11. Controlled Dense Coding Using the Maximal Slice States

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Mo, Zhi-wen; Sun, Shu-qin

    2016-04-01

    In this paper we investigate controlled dense coding with the maximal slice states. Three schemes are presented. Our schemes employ the maximal slice states as the quantum channel, which consists of a tripartite entangled state shared among the first party (Alice), the second party (Bob), and the third party (Cliff). The supervisor (Cliff) supervises and controls the channel between Alice and Bob via measurement. By carrying out a local von Neumann measurement, a controlled-NOT operation, and a positive operator-valued measure (POVM), and by introducing an auxiliary particle, we obtain the success probability of dense coding. It is shown that the success probability of information transmitted from Alice to Bob is usually less than one. The average amount of information for each scheme is calculated in detail. These results offer deeper insight into quantum dense coding via quantum channels of partially entangled states.

  12. Deterministic dense coding and faithful teleportation with multipartite graph states

    SciTech Connect

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-05-15

    We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a graph state to be viable for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.

  13. Sparse and dense coding of natural stimuli by distinct midbrain neuron subpopulations in weakly electric fish

    PubMed Central

    Vonderschen, Katrin; Chacron, Maurice J.

    2015-01-01

    While peripheral sensory neurons respond to natural stimuli with a broad range of spatiotemporal frequencies, central neurons generally respond sparsely to specific features. The nonlinear transformations leading to this emergent selectivity are not well understood. Here we characterized how the neural representation of stimuli changes across successive brain areas, using the electrosensory system of weakly electric fish as a model system. We found that midbrain torus semicircularis (TS) neurons were on average more selective in their responses than hindbrain electrosensory lateral line lobe (ELL) neurons. Further analysis revealed two categories of TS neurons: dense coding TS neurons that were ELL-like and sparse coding TS neurons that displayed selective responses. The latter generally responded to preferred stimuli with few spikes and were mostly silent for other stimuli. We further investigated whether information about stimulus attributes was contained in the activities of ELL and TS neurons. To do so, we used a spike train metric to quantify how well stimuli could be discriminated based on spiking responses. We found that sparse coding TS neurons performed poorly compared with ELL and dense coding TS neurons, even when their activities were combined. In contrast, combining the activities of as few as 12 dense coding TS neurons could lead to optimal discrimination. On the other hand, sparse coding TS neurons were better detectors of whether their preferred stimulus occurred than either dense coding TS or ELL neurons. Our results therefore suggest that the TS implements parallel detection and estimation of sensory input. PMID:21940609

  14. New optimal quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    Zhu, Shixin; Wang, Liqi; Kai, Xiaoshan

    2015-04-01

    One of the greatest challenges in proving the feasibility of quantum computers is protecting the quantum nature of information. Quantum convolutional codes are aimed at protecting a stream of quantum information in long-distance communication, and they are the correct generalization to the quantum domain of their classical analogs. In this paper, we construct some classes of quantum convolutional codes by employing classical constacyclic codes. These codes are optimal in the sense that they attain the Singleton bound for pure convolutional stabilizer codes.

  15. SWOC: Spectral Wavelength Optimization Code

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.

    2016-06-01

    SWOC (Spectral Wavelength Optimization Code) determines the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a spectroscopic study. It computes a figure-of-merit for different spectral configurations using a user-defined list of spectral features, and, utilizing a set of flux-calibrated spectra, determines the spectral regions showing the largest differences among the spectra.
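
    The abstract does not give the figure-of-merit formula, so the sketch below is a hypothetical stand-in: it scores a candidate wavelength window by the user-listed spectral features it covers plus the spread among the input spectra inside it. All names, data, and weighting are invented for illustration:

```python
import numpy as np

# Hypothetical stand-in for SWOC's figure of merit; the actual SWOC
# formula is not given in the abstract.
def figure_of_merit(window, features, wavelengths, spectra):
    lo, hi = window
    covered = sum(1.0 for f in features if lo <= f <= hi)   # features in window
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    # Spread among flux-calibrated spectra inside the window: larger
    # differences mean more discriminating power for the science goal.
    spread = spectra[:, mask].std(axis=0).sum() if mask.any() else 0.0
    return covered + spread

wl = np.linspace(400, 900, 501)                       # nm grid
spectra = np.vstack([np.exp(-((wl - c) / 40) ** 2)    # toy "stellar" spectra
                     for c in (520, 540, 700)])
features = [516, 589, 656]                            # e.g. Mg b, Na D, H-alpha
windows = [(480, 560), (560, 640), (640, 720)]
best = max(windows, key=lambda w: figure_of_merit(w, features, wl, spectra))
print(best)
```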

  16. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence comprising three primary independent modules: the initializer, the physics module, and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. Distributed-memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module, thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications, because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the

  17. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits it. It can include various realistic errors and is closer to reality than theoretical estimations. In this approach, a fast and parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.
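
    A DA scan parallelizes naturally: each MPI rank tracks a different slice of the initial conditions and the results are gathered at the end. A minimal mpi4py skeleton of that pattern (the `track_survives` physics is a placeholder stub; TESLA's actual interfaces are not described in the abstract):

```python
# Minimal MPI work-split pattern for a dynamic-aperture scan (mpi4py).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def track_survives(x0: float, y0: float, n_turns: int = 1024) -> bool:
    # Stand-in for real symplectic tracking through the lattice.
    return x0 ** 2 + y0 ** 2 < 1.0e-4

# Build the full grid of initial conditions, then take this rank's slice.
grid = [(x, y) for x in np.linspace(-0.02, 0.02, 64)
               for y in np.linspace(0.0, 0.01, 16)]
my_results = [(x, y, track_survives(x, y)) for (x, y) in grid[rank::size]]

all_results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for chunk in all_results for r in chunk]
    survived = sum(1 for *_, ok in flat if ok)
    print(f"{survived}/{len(flat)} initial conditions survived")
```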

  18. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of its deep space satellites and probes (e.g., Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, finding good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14 convolutional code.

  19. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic has concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block-interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block-interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
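
    The burst protection of a block-interleaved MDS code is simple arithmetic: an (n, k) MDS code recovers any n − k erased symbols per codeword, and symbol-wise interleaving to depth I spreads a burst of length L so that each codeword sees at most ⌈L/I⌉ erasures. A quick sketch with illustrative RS and SPC parameters:

```python
def max_burst(n: int, k: int, depth: int) -> int:
    """Longest burst erasure guaranteed correctable when `depth` codewords of
    an (n, k) MDS code are interleaved symbol by symbol: each codeword sees
    at most ceil(L / depth) erasures, and MDS recovers any n - k of them."""
    return depth * (n - k)

# Redundancy and guaranteed burst length scale together, so per the abstract
# no scheme of equal total length and rate can beat this by more than a symbol.
print(max_burst(n=255, k=223, depth=8))    # interleaved RS(255,223): 256 symbols
print(max_burst(n=8, k=7, depth=256))      # interleaved SPC(8,7):    256 symbols
```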

  20. Cross-code comparisons of mixing during the implosion of dense cylindrical and spherical shells

    NASA Astrophysics Data System (ADS)

    Joggerst, C. C.; Nelson, Anthony; Woodward, Paul; Lovekin, Catherine; Masser, Thomas; Fryer, Chris L.; Ramaprabhu, P.; Francois, Marianne; Rockefeller, Gabriel

    2014-10-01

    We present simulations of the implosion of a dense shell in two-dimensional (2D) spherical and cylindrical geometry performed with four different compressible, Eulerian codes: RAGE, FLASH, CASTRO, and PPM. We follow the growth of instabilities on the inner face of the dense shell. Three codes employed Cartesian grid geometry, and one (FLASH) employed polar grid geometry. While the codes are similar, they employ different advection algorithms, limiters, adaptive mesh refinement (AMR) schemes, and interface-preservation techniques. We find that the growth rate of the instability is largely insensitive to the choice of grid geometry or other implementation details specific to an individual code, provided the grid resolution is sufficiently fine. Overall, all simulations from different codes compare very well on the fine grids for which we tested them, though they show slight differences in small-scale mixing. Simulations produced by codes that explicitly limit numerical diffusion show a smaller amount of small-scale mixing than codes that do not. This difference is most prominent for low-mode perturbations where little instability finger interaction takes place, and less prominent for high- or multi-mode simulations where a great deal of interaction takes place, though it is still present. We present RAGE and FLASH simulations to quantify the initial perturbation amplitude to wavelength ratio at which metrics of mixing agree across codes, and find that bubble/spike amplitudes are converged for low-mode and high-mode simulations in which the perturbation amplitude is more than 1% and 5% of the wavelength of the perturbation, respectively. Other metrics of small-scale mixing depend on details of multi-fluid advection and do not converge between codes for the resolutions that were accessible.

  21. Efficient simultaneous dense coding and teleportation with two-photon four-qubit cluster states

    NASA Astrophysics Data System (ADS)

    Zhang, Cai; Situ, Haozhen; Li, Qin; He, Guang Ping

    2016-08-01

    We first propose a simultaneous dense coding protocol with two-photon four-qubit cluster states, in which two receivers can simultaneously obtain their respective classical information sent by a single sender. Because each photon has two degrees of freedom, the protocol achieves a high transmission capacity. The security of the simultaneous dense coding protocol is also analyzed. Second, we investigate how to simultaneously teleport two different quantum states, encoded in the polarization and path degrees of freedom, to the two receivers using cluster states, and discuss the security of this scheme. The preparation and transmission of two-photon four-qubit cluster states are less difficult than those of four-photon entangled states, and such states have been experimentally generated with nearly perfect fidelity and a high generation rate. Thus, our protocols are feasible with current quantum techniques.

  22. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    SciTech Connect

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy density in voluminous amounts compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  23. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint-mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
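
    To make the forward/adjoint distinction concrete, here is a toy forward-mode AD sketch using dual numbers (illustrative only; the paper differentiates a Fortran hydrodynamics code with AD tooling, not Python). One forward pass yields the derivative of all outputs with respect to one input; the adjoint (reverse) mode instead yields the derivative of one output with respect to all inputs, which is why it wins when many parameters feed a single misfit functional:

```python
# Toy forward-mode automatic differentiation with dual numbers.
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # value
    dot: float   # derivative w.r.t. the chosen input

    def __add__(self, o):  return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):  return Dual(self.val * o.val,
                                       self.dot * o.val + self.val * o.dot)

def pressure_model(a: Dual, b: Dual) -> Dual:
    """Stand-in for a code output depending on two model parameters."""
    return a * a * b + b     # p = a^2 b + b

# dp/da at (a, b) = (2, 3): seed a with dot = 1 and b with dot = 0.
p = pressure_model(Dual(2.0, 1.0), Dual(3.0, 0.0))
print(p.val, p.dot)          # 15.0, 12.0  (since dp/da = 2ab = 12)
```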

  24. Overcoming a limitation of deterministic dense coding with a nonmaximally entangled initial state

    SciTech Connect

    Bourdon, P. S.; Gerjuoy, E.

    2010-02-15

    Under two-party deterministic dense coding, Alice communicates (perfectly distinguishable) messages to Bob via a qudit from a pair of entangled qudits in pure state |Ψ⟩. If |Ψ⟩ represents a maximally entangled state (i.e., each of its Schmidt coefficients is √(1/d)), then Alice can convey to Bob one of d² distinct messages. If |Ψ⟩ is not maximally entangled, then Ji et al. [Phys. Rev. A 73, 034307 (2006)] have shown that under the original deterministic dense-coding protocol, in which messages are encoded by unitary operations performed on Alice's qudit, it is impossible to encode d²-1 messages. Encoding d²-2 messages is possible; see, for example, the numerical studies by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. Answering a question raised by Wu et al. [Phys. Rev. A 73, 042311 (2006)], we show that when |Ψ⟩ is not maximally entangled, the communications limit of d²-2 messages persists even when the requirement that Alice encode by unitary operations on her qudit is weakened to allow encoding by more general quantum operators. We then describe a dense-coding protocol that can overcome this limitation with high probability, assuming the largest Schmidt coefficient of |Ψ⟩ is sufficiently close to √(1/d). In this protocol, d²-2 of the messages are encoded via unitary operations on Alice's qudit, and the final (d²-1)-th message is encoded via a non-trace-preserving quantum operation.

  25. Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding

    NASA Astrophysics Data System (ADS)

    Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying

    2016-02-01

    We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.

  26. Effects of quantum noises and noisy quantum operations on entanglement and special dense coding

    SciTech Connect

    Quek, Sylvanus; Li Ziang; Yeo Ye

    2010-02-15

    We show how noncommuting noises could cause a Bell state χ₀ to suffer entanglement sudden death (ESD). ESD may similarly occur when a noisy operation acts, if the corresponding Hamiltonian and Lindblad operator do not commute. We study the implications of these in special dense coding S. When noises that cause ESD act, we show that χ₀ may lose its capacity for S before ESD occurs. Similarly, χ₀ may fail to yield information transfer better than classically possible when the encoding operations are noisy, though entanglement is not destroyed in the process.

  27. Optimizing Extender Code for NCSX Analyses

    SciTech Connect

    M. Richman, S. Ethier, and N. Pomphrey

    2008-01-22

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than a Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db_access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch.

  28. Optimal zone coding using the slant transform

    SciTech Connect

    Zadiraka, V.K.; Evtushenko, V.N.

    1995-03-01

    Discrete orthogonal transforms (DOTs) are widely used in digital signal processing, image coding and compression, systems theory, communication, and control. A special representative of the class of DOTs with nonsinusoidal basis functions is the slant transform, which is distinguished by the presence of a slanted vector with linearly decreasing components in its basis. The slant transform of fourth and eighth orders was introduced in 1971 by Enomoto and Shibata especially for efficient representation of the video signal in line sections with smooth variation of brightness. It has been used for television image coding. Pratt, Chen, and Welch generalized the slant transform to vectors of any dimension N = 2ⁿ and two-dimensional arrays, and derived posterior estimates of reconstruction error with zonal image compression (the zones were chosen by trial and error) for various transforms. These estimates show that, for the same N and the same compression ratio τ, the slant transform is inferior to the Karhunen-Loève transform and superior to the Walsh and Fourier transforms. In this paper, we derive prior estimates of the reconstruction error for the slant transform in zone coding and suggest an optimal technique for zone selection.
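
    For concreteness, the order-4 slant transform matrix from the image coding literature can be written down and checked numerically (this construction is standard and not specific to the paper); note how a smooth brightness ramp compacts into just the DC and slant coefficients:

```python
import numpy as np

s5 = np.sqrt(5.0)
S4 = 0.5 * np.array([
    [1,      1,      1,      1     ],   # DC basis vector
    [3/s5,   1/s5,  -1/s5,  -3/s5  ],   # the "slant": linearly decreasing
    [1,     -1,     -1,      1     ],
    [1/s5,  -3/s5,   3/s5,  -1/s5  ],
])

# Orthonormality check: S4 @ S4.T should be the identity.
assert np.allclose(S4 @ S4.T, np.eye(4))

ramp = np.array([10.0, 12.0, 14.0, 16.0])   # smooth brightness ramp
print(S4 @ ramp)   # only the DC and slant coefficients are nonzero
```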

  29. Optimization Principles for the Neural Code

    NASA Astrophysics Data System (ADS)

    Deweese, Michael Robert

    1995-01-01

    Animals receive information from the world in the form of continuous functions of time. At a very early stage in processing, however, these continuous signals are converted into discrete sequences of identical "spikes". All information that the brain receives about the outside world is encoded in the arrival times of these spikes. The goal of this thesis is to determine if there is a universal principle at work in this neural code. We are motivated by several recent experiments on a wide range of sensory systems which share four main features: high information rates, moderate signal-to-noise ratio, efficient use of the spike train entropy to encode the signal, and the ability to extract nearly all the information encoded in the spike train with a linear response function triggered by the spikes. We propose that these features can be understood in terms of codes "designed" to maximize information flow. To test this idea, we use the fact that any point process encoding of an analog signal embedded in noise can be written in the language of a threshold crossing model to develop a systematic expansion for the transmitted information about the Poisson limit--the limit where there are no correlations between the spikes. All codes take the same simple form in the Poisson limit, and all of the seemingly unrelated features of the data arise naturally when we optimize a simple linear filtered threshold crossing model. We make a new prediction: finding the optimum requires adaptation to the statistical structure of the signal and noise, not just to DC offsets. The only disagreement we find is that real neurons outperform our model in the task it was optimized for--they transmit much more information. We then place an upper bound on the amount of information available from the leading term in the Poisson expansion for any possible encoding, and find that real neurons do exceedingly well even by this standard. We conclude that several important features of the neural code can

  30. Statistical physics, optimization and source coding

    NASA Astrophysics Data System (ADS)

    Zecchina, Riccardo

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question ``does there exist an assignment to the variables that satisfies all constraints?'' may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms -- the survey propagation (SP) algorithms -- that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.
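
    As a concrete illustration of the problem ensemble being discussed, here is a random 3-SAT generator with a brute-force satisfiability check (survey propagation itself is far more involved; the sizes and names below are mine):

```python
import itertools
import random

def random_3sat(n_vars: int, n_clauses: int, seed: int = 0):
    """Random 3-SAT: each clause picks 3 distinct variables, negated at random.
    A literal +v means variable v, -v means its negation (1-indexed)."""
    rng = random.Random(seed)
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(formula, n_vars: int) -> bool:
    """Brute force; only viable for small n, unlike message-passing solvers."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in formula):
            return True
    return False

# The hard "glassy" regime for random 3-SAT sits near ~4.27 clauses/variable.
n = 12
for ratio in (2.0, 4.27, 6.0):
    f = random_3sat(n, int(ratio * n), seed=1)
    print(f"alpha={ratio}: satisfiable={satisfiable(f, n)}")
```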

  31. Optimality principles for the visual code

    NASA Astrophysics Data System (ADS)

    Pitkow, Xaq

    One way to try to make sense of the complexities of our visual system is to hypothesize that evolution has developed nearly optimal solutions to the problems organisms face in the environment. In this thesis, we study two such principles of optimality for the visual code. In the first half of this dissertation, we consider the principle of decorrelation. Influential theories assert that the center-surround receptive fields of retinal neurons remove spatial correlations present in the visual world. It has been proposed that this decorrelation serves to maximize information transmission to the brain by avoiding transfer of redundant information through optic nerve fibers of limited capacity. While these theories successfully account for several aspects of visual perception, the notion that the outputs of the retina are less correlated than its inputs has never been directly tested at the site of the putative information bottleneck, the optic nerve. We presented visual stimuli with naturalistic image correlations to the salamander retina while recording responses of many retinal ganglion cells using a microelectrode array. The output signals of ganglion cells are indeed decorrelated compared to the visual input, but the receptive fields are only partly responsible. Much of the decorrelation is due to the nonlinear processing by neurons rather than the linear receptive fields. This form of decorrelation dramatically limits information transmission. Instead of improving coding efficiency we show that the nonlinearity is well suited to enable a combinatorial code or to signal robust stimulus features. In the second half of this dissertation, we develop an ideal observer model for the task of discriminating between two small stimuli which move along an unknown retinal trajectory induced by fixational eye movements. The ideal observer is provided with the responses of a model retina and guesses the stimulus identity based on the maximum likelihood rule, which involves sums

  32. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
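
    The patent targets compiled code, but the run-time idea (execute the aggressively optimized version, fall back to the conservative version on the new class of failure it may introduce) can be sketched generically. The Python below illustrates the pattern only, not the patented mechanism:

```python
# Generic "aggressive with rollback" execution pattern, loosely modeled on
# the patent's description: try the fast-but-possibly-unsafe variant, and on
# the new failure class it introduces, roll back to the conservative variant.

def conservative_sum(xs):
    total = 0.0
    for x in xs:
        total += float(x)          # tolerates any numeric-convertible input
    return total

def aggressive_sum(xs):
    # "Unsafe optimization": assumes xs contains plain numbers and can raise
    # TypeError on inputs the conservative version tolerates.
    return sum(xs)

def run_with_rollback(xs):
    try:
        return aggressive_sum(xs)        # fast path
    except TypeError:
        return conservative_sum(xs)      # roll back to the safe version

print(run_with_rollback([1.0, 2.0, 3.0]))   # fast path succeeds: 6.0
print(run_with_rollback([1.0, "2", 3.0]))   # fast path fails, falls back: 6.0
```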

  33. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
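
    The optimal prefix codes for geometrically distributed nonnegative integers are the Golomb codes, which is why they appear in the run-length applications the abstract mentions. A minimal sketch of the Rice special case (Golomb parameter m = 2^k; general m needs a truncated-binary remainder):

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code (Golomb with m = 2**k): optimal prefix code for geometrically
    distributed nonnegative integers when m matches the decay rate.
    Quotient in unary, remainder in k plain binary digits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                      # unary part: count leading 1s
    r = int(bits[q + 1 : q + 1 + k] or "0", 2)
    return (q << k) | r

for n in range(6):
    code = rice_encode(n, k=1)
    assert rice_decode(code, k=1) == n
    print(n, code)   # codeword lengths grow linearly, matching geometric decay
```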

  34. Optimal coding schemes for conflict-free channel access

    NASA Astrophysics Data System (ADS)

    Browning, Douglas W.; Thomas, John B.

    1989-10-01

    A method is proposed for conflict-free access of a broadcast channel. The method uses a variable-length coding scheme to determine which user gains access to the channel. For an idle channel, an equation for optimal expected overhead is derived and a coding scheme that produces optimal codes is presented. Algorithms for generating optimal codes for access on a busy channel are discussed. Suboptimal schemes are found that perform in a nearly optimal fashion. The method is shown to be superior in performance to previously developed conflict-free channel access schemes.

  35. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.

  36. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.

  37. Analysis of the optimality of the standard genetic code.

    PubMed

    Kumar, Balaji; Saini, Supreet

    2016-07-19

    Many theories have been proposed attempting to explain the origin of the genetic code. While strong reasons remain to believe that the genetic code evolved as a frozen accident, at least for the first few amino acids, other theories remain viable. In this work, we test the optimality of the standard genetic code against approximately 17 million genetic codes, and locate 29 which outperform the standard genetic code at the following three criteria: (a) robustness to point mutation; (b) robustness to frameshift mutation; and (c) ability to encode additional information in the coding region. We use a genetic algorithm to generate and score codes from different parts of the associated landscape, which are, as a result, presumably more representative of the entire landscape. Our results show that while the genetic code is sub-optimal for robustness to frameshift mutation and the ability to encode additional information in the coding region, it is very strongly selected for robustness to point mutation. This coupled with the observation that the different performance indicator scores for a particular genetic code are negatively correlated makes the standard genetic code nearly optimal for the three criteria tested in this work. PMID:27327359

  38. Effects of intrinsic decoherence on various correlations and quantum dense coding in a two superconducting charge qubit system

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Maimaitiyiming-Tusun; Parouke-Paerhati; Ahmad-Abliz

    2015-09-01

    The influence of intrinsic decoherence on various correlations and dense coding in a model consisting of two identical superconducting charge qubits coupled by a fixed capacitor is investigated. The results show that, despite the intrinsic decoherence, the correlations as well as the dense coding channel capacity can be effectively increased via a suitable combination of system parameters, i.e., making the mutual coupling energy between the two charge qubits larger than the Josephson energy of the qubits. The bigger the difference between them, the stronger the effect. Project supported by the Project to Develop Outstanding Young Scientific Talents of China (Grant No. 2013711019), the Natural Science Foundation of Xinjiang Province, China (Grant No. 2012211A052), the Foundation for Key Program of Ministry of Education of China (Grant No. 212193), and the Innovative Foundation for Graduate Students Granted by the Key Subjects of Theoretical Physics of Xinjiang Province, China (Grant No. LLWLL201301).

  39. Noise-optimal capture for coded exposure photography

    NASA Astrophysics Data System (ADS)

    Huang, Kuihua; Zhang, Jun; Li, Guohui

    2012-09-01

    Searching for the optimal shutter sequence is the key problem in coded exposure photography. Previous shutter sequence search methods focus on point spread function estimation and invertibility, and either ignore the influence of the scene light level or avoid noise calibration of real cameras. For practical purposes, we address the problem of finding an optimal shutter sequence for coded exposure photography in the presence of photon noise. We analyze the effect of photon noise on the optimal shutter sequence in terms of deconvolution noise and derive analytic formulas. We show that Raskar's code is a special case of our analysis. Based on noise calibration of the coded exposure camera, an effective fitness function is proposed, and using our carefully designed genetic algorithm, we obtain the optimal shutter sequence in little running time. Experimental results with synthetic and real data demonstrate the advantage of our approach compared to the state-of-the-art approach.
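
    A common invertibility criterion in Raskar-style coded exposure analysis is the DFT magnitude of the binary shutter code, with inverse-filter noise scaling as the sum of 1/|DFT|². The scoring sketch below is illustrative only and omits the paper's noise calibration and genetic algorithm:

```python
import numpy as np

def deconv_noise_score(code):
    """Variance amplification of inverse filtering with this shutter code:
    sum over frequencies of 1 / |DFT|^2 (lower is better). A code with a
    near-zero DFT bin is effectively non-invertible."""
    spectrum = np.abs(np.fft.fft(np.asarray(code, dtype=float)))
    if spectrum.min() < 1e-9:
        return np.inf
    return float(np.sum(1.0 / spectrum ** 2))

rng = np.random.default_rng(0)
n, n_open = 32, 16                     # code length and number of "open" chops
best = min((tuple(rng.permutation([1] * n_open + [0] * (n - n_open)))
            for _ in range(5000)), key=deconv_noise_score)

box = [1] * n_open + [0] * (n - n_open)   # ordinary exposure: one open block
print(deconv_noise_score(box), deconv_noise_score(best))
# The box filter has deep spectral nulls (score inf); a good fluttered code
# keeps all DFT bins away from zero.
```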

  40. Optimization of KINETICS Chemical Computation Code

    NASA Technical Reports Server (NTRS)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.

  41. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
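
    Both search criteria are cheap to evaluate with an FFT. A small sketch, using a length-7 m-sequence as a known-good example (the paper's length 28 to 64 searches are not reproduced here):

```python
import numpy as np

def periodic_autocorrelation(code):
    """Periodic (cyclic) autocorrelation of a +/-1 sequence via the
    Wiener-Khinchin relation: IFFT of the power spectrum."""
    c = np.asarray(code, dtype=float)
    return np.round(np.fft.ifft(np.abs(np.fft.fft(c)) ** 2).real).astype(int)

def sidelobe_metrics(code):
    """The two criteria from the abstract: peak sidelobe magnitude and the
    sum of squared sidelobes (zero lag excluded)."""
    acf = periodic_autocorrelation(code)
    side = acf[1:]
    return int(np.max(np.abs(side))), int(np.sum(side ** 2))

# A length-7 m-sequence mapped to +/-1 has ideal two-valued autocorrelation.
m_seq = np.array([1, 1, 1, -1, 1, -1, -1])
print(periodic_autocorrelation(m_seq))   # [7, -1, -1, -1, -1, -1, -1]
print(sidelobe_metrics(m_seq))           # (1, 6)
```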

  42. Optimizing Nuclear Physics Codes on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Nam, Hai Ah

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  43. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.

  44. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance of small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  45. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-06-01

    Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns, as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense-array tDCS which differs in some important aspects from methods reported to date. Approach. We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume-based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives
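
    Since the abstract describes a convex program (maximize directional ROI current density under power and current safety bounds), the structure can be sketched with an off-the-shelf convex solver. Everything below, including the random lead-field matrices and the bound values, is an illustrative assumption rather than the paper's actual model:

```python
# Directional tDCS targeting as a small convex program, in the spirit of
# (but not identical to) the paper's formulation. A real study would derive
# the lead-field matrices from a finite-element head model.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_brain = 16, 200
l_roi = rng.standard_normal(n_elec)        # maps electrode currents to ROI
                                           # current density along the target
L_brain = rng.standard_normal((n_brain, n_elec))  # current density elsewhere

i = cp.Variable(n_elec)                    # per-electrode currents (mA)
constraints = [
    cp.sum(i) == 0,                        # injected current must return
    cp.norm(i, "inf") <= 1.0,              # per-electrode safety bound
    cp.norm(i, 1) <= 4.0,                  # bound on total injected current
    cp.sum_squares(L_brain @ i) <= 25.0,   # cap current power in the brain
]
prob = cp.Problem(cp.Maximize(l_roi @ i), constraints)
prob.solve()
print(prob.status, float(prob.value))      # globally optimal by convexity
```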

  46. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  47. Optimal Grouping and Matching for Network-Coded Cooperative Communications

    SciTech Connect

    Sharma, S; Shi, Y; Hou, Y T; Kompella, S; Midkiff, S F

    2011-11-01

    Network-coded cooperative communications (NC-CC) is a new advance in wireless networking that exploits network coding (NC) to improve the performance of cooperative communications (CC). However, there remains very limited understanding of this new hybrid technology, particularly at the link layer and above. This paper fills in this gap by studying a network optimization problem that requires joint optimization of session grouping, relay node grouping, and matching of session/relay groups. After showing that this problem is NP-hard, we present a polynomial time heuristic algorithm to this problem. Using simulation results, we show that our algorithm is highly competitive and can produce near-optimal results.

  8. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  9. A systematic method of interconnection optimization for dense-array concentrator photovoltaic system.

    PubMed

    Siaw, Fei-Lu; Chong, Kok-Keong

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points, namely short-circuit, open-circuit, and maximum power point, are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power differs by only 1.34%.
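
    The I-V curve prediction step rests on standard interconnection rules: cells in series carry the same current while their voltages add, and strings in parallel share a voltage while their currents add. The sketch below is a simplified illustration of the series rule only (not the paper's algorithm); the cell curves are hypothetical and bypass diodes are ignored.

      # Series combination of cell I-V curves: at a common current, voltages
      # add; the weaker cell limits the string current. Cell data are made up.
      import numpy as np

      def series_iv(iv_curves, n=200):
          """Combine (current, voltage) curves of series-connected cells."""
          i_max = min(c[:, 0].max() for c in iv_curves)
          i_grid = np.linspace(0.0, i_max, n)
          # each cell's voltage interpolated as a function of current, summed
          v_sum = sum(np.interp(i_grid, c[::-1, 0], c[::-1, 1]) for c in iv_curves)
          return np.column_stack([i_grid, v_sum])

      # hypothetical cells: rows are (current A, voltage V), current falling as V rises
      cell_a = np.array([[3.0, 0.0], [2.9, 2.0], [2.5, 2.5], [0.0, 3.0]])
      cell_b = np.array([[2.0, 0.0], [1.9, 2.0], [1.6, 2.5], [0.0, 3.0]])
      string = series_iv([cell_a, cell_b])
      p_max = (string[:, 0] * string[:, 1]).max()
      print("string maximum power ~", round(p_max, 2), "W")  # limited by cell_b

    Grouping cells with similar critical points into the same module, as the paper does, minimizes exactly this weaker-cell mismatch loss.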

  12. Tuning Complex Computer Codes to Data and Optimal Designs

    NASA Astrophysics Data System (ADS)

    Park, Jeong Soo

    Modern scientific researchers often use complex computer simulation codes for theoretical investigations. We model the response of a computer simulation code as the realization of a stochastic process. This approach, design and analysis of computer experiments (DACE), provides a statistical basis for analysing computer data, for designing experiments for efficient prediction and for comparing computer-encoded theory to experiments. An objective of research in a large class of dynamic systems is to determine any unknown coefficients in a theory. The coefficients can be determined by "tuning" the computer model to the real data so that the tuned code gives a good match to the real experimental data. Three design strategies for computer experiments are considered: data-adaptive sequential A-optimal design, maximum entropy design and optimal Latin-hypercube design. The following "code tuning" methodologies are proposed: nonlinear least squares, joint MLE, "separated" joint MLE and a Bayesian method. The performance of these methods has been studied in several toy models. In the application to nuclear fusion devices, a cheaper emulator of the simulation code (BALDUR) has been constructed, and the transport coefficients were estimated from data of two tokamaks (ASDEX and PDX). Tuning complex computer codes to data using statistical estimation methods and a cheap emulator of the code, along with careful designs of computer experiments, with applications to nuclear fusion devices, is the topic of this thesis.
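
    As a toy illustration of this tuning idea (not the thesis code), one can stand in for the expensive simulation with a cheap function, fit an emulator to a handful of design runs, and then estimate the unknown coefficient by minimizing the emulator-data misfit. All names and values below are hypothetical.

      # Emulator-based "code tuning" sketch: fit a cheap surrogate to a few
      # runs of an expensive code, then tune a coefficient against one datum.
      import numpy as np
      from scipy.interpolate import RBFInterpolator
      from scipy.optimize import minimize_scalar

      def expensive_code(theta):               # stand-in for a slow simulation
          return np.sin(theta) + 0.1 * theta

      design = np.linspace(0.0, 3.0, 8)[:, None]        # 1-D experimental design
      runs = np.array([expensive_code(t[0]) for t in design])
      emulator = RBFInterpolator(design, runs)          # cheap surrogate of the code

      y_obs = expensive_code(1.7) + 0.01                # "experimental" datum
      misfit = lambda th: (emulator([[th]])[0] - y_obs) ** 2
      theta_hat = minimize_scalar(misfit, bounds=(0.0, 3.0), method="bounded").x
      print("tuned coefficient ~", round(theta_hat, 3))

    The thesis's joint-MLE and Bayesian variants add statistical structure on top of this basic emulate-then-tune loop, and the design strategies it compares determine where the emulator's training runs are placed.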

  13. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided into four such subgroups). The three approaches to explaining the robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.

  14. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant in coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the noise effect on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method. The restored image shows better subjective quality and superior objective evaluation values.
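
    A common frequency-domain criterion for coded exposure (used here in simplified form, not the paper's improved criterion) is to prefer shutter codes whose DFT magnitude never comes close to zero, since the motion blur is then well conditioned for inversion. The brute-force search below is a stand-in for the paper's genetic algorithm; all parameters are illustrative.

      # Search binary shutter codes with a fixed number of "open" slots,
      # scoring each by the minimum magnitude of its DFT (larger is better).
      import numpy as np

      rng = np.random.default_rng(0)
      length, ones, trials = 32, 16, 20000

      def score(code):
          return np.abs(np.fft.fft(code)).min()   # invertibility of the blur

      best, best_score = None, -np.inf
      for _ in range(trials):
          code = np.zeros(length)
          code[rng.choice(length, ones, replace=False)] = 1.0
          s = score(code)
          if s > best_score:
              best, best_score = code, s

      print("best code:", best.astype(int))
      print("min |DFT| =", round(best_score, 3))

    A genetic algorithm, as in the paper, explores the same search space far more efficiently by crossing over and mutating good codes instead of sampling blindly.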

  15. Optimized design and research of secondary microprism for dense array concentrating photovoltaic module

    NASA Astrophysics Data System (ADS)

    Yang, Guanghui; Chen, Bingzhen; Liu, Youqiang; Guo, Limin; Yao, Shun; Wang, Zhiyong

    2015-10-01

    As the critical component of a concentrating photovoltaic module, secondary concentrators can be effective in increasing the acceptance angle and incident light, as well as in improving the energy uniformity of focal spots. This paper presents a design of a transmission-type secondary microprism for a dense array concentrating photovoltaic module. The 3-D model of this design is established in SolidWorks, and important parameters such as the inclination angle and component height are optimized using Zemax. According to the design and simulation results, several secondary microprisms with different parameters are fabricated and tested in combination with a Fresnel lens and a multi-junction solar cell. The sun-simulator I-V test results show that the combination has the highest output power when the secondary microprism height is 5 mm and the top facet side length is 7 mm. Compared with the case without a secondary microprism, the output power improves by 11% after the employment of secondary microprisms, indicating the indispensability of secondary microprisms in concentrating photovoltaic modules.

  16. Lossless coding using predictors and VLCs optimized for each image

    NASA Astrophysics Data System (ADS)

    Matsuda, Ichiro; Shirai, Noriyuki; Itoh, Susumu

    2003-06-01

    This paper proposes an efficient lossless coding scheme for still images. The scheme utilizes an adaptive prediction technique where a set of linear predictors is designed for a given image and an appropriate predictor is selected from the set block-by-block. The resulting prediction errors are encoded using context-adaptive variable-length codes (VLCs). Context modeling, or adaptive selection of VLCs, is carried out pel-by-pel, and the VLC assigned to each context is designed on a probability distribution model of the prediction errors. In order to improve coding efficiency, a generalized Gaussian function is used as the model for each context. Moreover, not only the predictors but also the parameters of the probability distribution models are iteratively optimized for each image so that the coding rate of the prediction errors is minimized. Experimental results show that the proposed coding scheme attains coding performance comparable to the state-of-the-art TMW scheme with much lower complexity in the decoding process.

  17. Optimal control of coupled PDE networks with automated code generation

    NASA Astrophysics Data System (ADS)

    Papadopoulos, D.

    2012-09-01

    The purpose of this work is to present a framework for the optimal control of coupled PDE networks. A coupled PDE network is a system of partial differential equations coupled together. Such systems can be represented as a directed graph. A domain specific language (DSL), an extension of the DOT language, is used for the description of such a coupled PDE network. The adjoint equations and the gradient, required for its optimal control, are computed with the help of a computer algebra system (CAS). Automated code generation techniques have been used for the generation of the PDE systems of both the direct and the adjoint equations. Both the direct and adjoint equations are solved with the standard finite element method. Finally, standard optimization techniques such as BFGS and Newton conjugate gradient are used for the numerical optimization of the system.

  18. Effective squeezing enhancement via measurement-induced non-Gaussian operation and its application to the dense coding scheme

    SciTech Connect

    Kitagawa, Akira; Takeoka, Masahiro; Sasaki, Masahide; Wakui, Kentaro

    2005-08-15

    We study the measurement-induced non-Gaussian operation on the single- and two-mode Gaussian squeezed vacuum states with beam splitters and on-off type photon detectors, with which mixed non-Gaussian states are generally obtained in the conditional process. It is known that the entanglement can be enhanced via this non-Gaussian operation on the two-mode squeezed vacuum state. We show that, in the range of practical squeezing parameters, the conditional outputs are still close to Gaussian states, but their second order variances of quantum fluctuations and correlations are effectively suppressed and enhanced, respectively. To investigate an operational meaning of these states, especially entangled states, we also evaluate the quantum dense coding scheme from the viewpoint of the mutual information, and we show that non-Gaussian entangled state can be advantageous compared with the original two-mode squeezed state.

  19. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or other mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  20. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
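
    The core of binary switching is simple: repeatedly try swapping the bit labels of two constellation points and keep any swap that lowers the labeling cost. The sketch below uses a toy 8-PSK constellation and a generic distance-weighted Hamming cost, not the paper's combined Euclidean-distance/mutual-information cost.

      # Greedy binary-switching sketch over a toy constellation labeling.
      import numpy as np
      from itertools import combinations

      M = 8
      points = np.exp(2j * np.pi * np.arange(M) / M)    # toy 8-PSK constellation
      labels = np.random.default_rng(1).permutation(M)  # initial bit labeling

      def cost(lab):
          # near neighbors should differ in few bits
          return sum(bin(int(lab[i]) ^ int(lab[j])).count("1") / abs(points[i] - points[j])
                     for i, j in combinations(range(M), 2))

      improved = True
      while improved:
          improved = False
          base = cost(labels)
          for i, j in combinations(range(M), 2):
              labels[[i, j]] = labels[[j, i]]           # try a label swap
              if cost(labels) < base:
                  base, improved = cost(labels), True   # keep the improvement
              else:
                  labels[[i, j]] = labels[[j, i]]       # undo
      print("optimized labeling:", labels)

    Modified binary-switching variants, like the one the abstract mentions, typically add moves that let the search escape local minima of the cost.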

  1. Optimization of Russian roulette parameters for the KENO computer code

    SciTech Connect

    Hoffman, T.J.

    1982-10-01

    Proper specification of the (statistical) weight standards for Monte Carlo calculations can lead to a substantial reduction in computer time. Frequently these weights are set intuitively. When optimization is performed, it is usually based on a simplified model (to enable mathematical analysis) and involves minimization of the sample variance. In this report, weight standards are optimized through consideration of the actual implementation of Russian roulette in the KENO computer code. The goal is minimization of computer time rather than minimization of sample variance. Verification of the development and assumptions is obtained from Monte Carlo simulations. The results indicate that the current default weight standards are appropriate for most problems in which thermal neutron transport is not a major consumer of computer time. For thermal systems, the optimization technique described in this report should be used.
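
    For readers unfamiliar with the technique, the sketch below shows the standard unbiased Russian roulette game in a generic Monte Carlo setting; the weight thresholds are illustrative and are not KENO's defaults.

      # Russian roulette: particles whose statistical weight falls below a
      # cutoff are killed with a probability chosen so that expected weight
      # is conserved, trading a little variance for much less CPU time.
      import random

      W_LOW, W_SURVIVE = 0.25, 1.0      # illustrative weight standards

      def russian_roulette(weight):
          """Return the particle's new weight, or None if it is killed."""
          if weight >= W_LOW:
              return weight                         # no roulette played
          if random.random() < weight / W_SURVIVE:
              return W_SURVIVE                      # survivor's weight is raised
          return None                               # killed; game stays unbiased

      # expected weight is conserved: (w / W_SURVIVE) * W_SURVIVE = w
      print(russian_roulette(0.1))

    The report's point is that the choice of these weight standards should minimize computer time for the actual KENO implementation, not just the sample variance of an idealized model.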

  2. The microstructures of cold dense systems as informed by hard sphere models and optimal packings

    NASA Astrophysics Data System (ADS)

    Hopkins, Adam Bayne

    Sphere packings, or arrangements of "billiard balls" of various sizes that never overlap, are especially informative and broadly applicable models. In particular, a hard sphere model describes the important foundational case where potential energy due to attractive and repulsive forces is not present, meaning that entropy dominates the system's free energy. Sphere packings have been widely employed in chemistry, materials science, physics and biology to model a vast range of materials including concrete, rocket fuel, proteins, liquids and solid metals, to name but a few. Despite their richness and broad applicability, many questions about fundamental sphere packings remain unanswered. For example, what are the densest packings of identical three-dimensional spheres within certain defined containers? What are the densest packings of binary spheres (spheres of two different sizes) in three-dimensional Euclidean space R3 ? The answers to these two questions are important in condensed matter physics and solid-state chemistry. The former is important to the theory of nucleation in supercooled liquids and the latter in terms of studying the structure and stability of atomic and molecular alloys. The answers to both questions are useful when studying the targeted self-assembly of colloidal nanostructures. In this dissertation, putatively optimal answers to both of these questions are provided, and the applications of these findings are discussed. The methods developed to provide these answers, novel algorithms combining sequential linear and nonlinear programming techniques with targeted stochastic searches of configuration space, are also discussed. In addition, connections between the realizability of pair correlation functions and optimal sphere packings are studied, and mathematical proofs are presented concerning the characteristics of both locally and globally maximally dense structures in arbitrary dimension d. Finally, surprising and unexpected findings are

  3. Optimal bounds for parity-oblivious random access codes

    NASA Astrophysics Data System (ADS)

    Chailloux, André; Kerenidis, Iordanis; Kundu, Srijita; Sikora, Jamie

    2016-04-01

    Random access coding is an information task that has been extensively studied and found many applications in quantum information. In this scenario, Alice receives an n-bit string x, and wishes to encode x into a quantum state ρ_x, such that Bob, when receiving the state ρ_x, can choose any bit i ∈ [n] and recover the input bit x_i with high probability. Here we study two variants: parity-oblivious random access codes (RACs), where we impose the cryptographic property that Bob cannot infer any information about the parity of any subset of bits of the input apart from the single bits x_i; and even-parity-oblivious RACs, where Bob cannot infer any information about the parity of any even-size subset of bits of the input. In this paper, we provide the optimal bounds for parity-oblivious quantum RACs and show that they are asymptotically better than the optimal classical ones. Our results provide a large non-contextuality inequality violation and resolve the main open problem in a work of Spekkens et al (2009 Phys. Rev. Lett. 102 010401). Second, we provide the optimal bounds for even-parity-oblivious RACs by proving their equivalence to a non-local game and by providing tight bounds for the success probability of the non-local game via semidefinite programming. In the case of even-parity-oblivious RACs, the cryptographic property holds also in the device independent model.

  4. Efficient sensory cortical coding optimizes pursuit eye movements

    PubMed Central

    Liu, Bing; Macellaio, Matthew V.; Osborne, Leslie C.

    2016-01-01

    In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214

  6. Investigation of Navier-Stokes code verification and design optimization

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Rajkumar

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between the concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-epsilon turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi

  7. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between the concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-epsilon turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization

  8. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate pin level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions applying the original and the optimized DYNSUB using 8 cores, overall speed up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin level safety parameters for engineering sized cases in a scientific environment. (authors)

  9. Neutron Activation Analysis PRognosis and Optimization Code System.

    2004-08-20

    Version 00 NAAPRO predicts the results and main characteristics (detection limits, determination limits, measurement limits and relative precision of the analysis) of neutron activation analysis (instrumental and radiochemical). Gamma-ray dose rates for different points of time after sample irradiation and the input count rate of the spectrometry system are also predicted. The code uses a standard Windows user interface and extensive graphical tools for the visualization of the spectrometer characteristics (efficiency, response and background) and the simulated spectrum. The optimization part is not included in the current version of the code. This release is designated NAAPRO, Version 01.beta. The MCNP code was used for generating detector responses. The PREPRO-2000 and FCONV programs were used in the preparation of the program's nuclear databases. A special program was developed for viewing, editing and updating the program databases (not included in the present program package). The MCNP, PREPRO-2000 and FCONV software packages are not included in the NAAPRO package.

  10. Variational Perturbation Theory Path Integral Monte Carlo (VPT-PIMC): Trial Path Optimization Approach for Warm Dense Matter

    NASA Astrophysics Data System (ADS)

    Belof, Jonathan; Dubois, Jonathan

    2013-06-01

    Warm dense matter (WDM), the regime of degenerate and strongly coupled Coulomb systems, is of great interest due to its importance in understanding astrophysical processes and high energy density laboratory experiments. Path Integral Monte Carlo (PIMC) presents a particularly attractive formalism for tackling outstanding questions in WDM, in that electron correlation can be calculated exactly, with the nuclear and electronic degrees of freedom on equal footing. Here we present an efficient means of solving the Feynman path integral numerically by variational optimization of a trial density matrix, a method originally proposed for simple potentials by Feynman and Kleinert, and we show that this formalism provides an accurate description of warm dense matter with a number of unique advantages over other PIMC approaches. An exchange interaction term is derived for the variationally optimized path, as well as a numerically efficient scheme for dealing with long-range electrostatics. Finally, we present results for the pair correlation functions and thermodynamic observables of the spin polarized electron gas, warm dense hydrogen and all-electron warm dense carbon within the presented VPT-PIMC formalism. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.

  11. Iterative Phase Optimization of Elementary Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Müller, M.; Rivas, A.; Martínez, E. A.; Nigg, D.; Schindler, P.; Monz, T.; Blatt, R.; Martin-Delgado, M. A.

    2016-07-01

    Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.

  12. Image-Guided Non-Local Dense Matching with Three-Steps Optimization

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Zhang, Yongjun; Yue, Zhaoxi

    2016-06-01

    This paper introduces a new image-guided non-local dense matching algorithm that focuses on how to solve the following problems: 1) mitigating the influence of vertical parallax on the cost computation in stereo pairs; 2) guaranteeing the performance of dense matching in homogeneous intensity regions with significant disparity changes; 3) limiting the inaccurate cost propagated from depth discontinuity regions; 4) guaranteeing that the path between two pixels in the same region is connected; and 5) defining the cost propagation function between reliable and unreliable pixels during disparity interpolation. This paper combines the Census histogram and an improved histogram of oriented gradient (HOG) feature as the cost metrics, which are then aggregated based on a new iterative non-local matching method and the semi-global matching method. Finally, new rules of cost propagation between the valid pixels and the invalid pixels are defined to improve the disparity interpolation results. The results of our experiments using the benchmarks and the Toronto aerial images from the International Society for Photogrammetry and Remote Sensing (ISPRS) show that the proposed new method can outperform most of the current state-of-the-art stereo dense matching methods.
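
    The Census part of that cost can be shown compactly. The sketch below is illustrative only (the paper aggregates Census histograms with a HOG-like gradient feature): it encodes each pixel by which neighbors in a 5x5 window are darker than it, and scores correspondences by the Hamming distance between these bit patterns, which is robust to radiometric differences between images.

      # Census transform and Hamming matching cost on toy images.
      import numpy as np

      def census(img, r=2):
          """Bit pattern per pixel: 1 where a window neighbor is darker."""
          out = np.zeros(img.shape, dtype=np.uint32)
          bit = 0
          for dy in range(-r, r + 1):
              for dx in range(-r, r + 1):
                  if dy == 0 and dx == 0:
                      continue
                  neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                  out |= (neighbor < img).astype(np.uint32) << np.uint32(bit)
                  bit += 1
          return out

      def hamming(c1, c2):
          """Per-pixel Hamming distance between Census codes (popcount)."""
          x, count = c1 ^ c2, np.zeros(c1.shape, dtype=np.uint32)
          while np.any(x):
              count += x & 1
              x >>= 1
          return count

      rng = np.random.default_rng(0)
      left = rng.random((32, 32))
      right = left * 0.7 + 0.1          # pure gain/offset change
      print("mean cost:", hamming(census(left), census(right)).mean())  # exactly 0

    Because Census compares only intensity orderings, a gain/offset change between the two images leaves the cost at zero, which is why such costs help in stereo pairs with lighting differences.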

  13. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated with formula derivations of acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results prove that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and lower fluctuations of the circular pressure distributions can be produced for more sources. With the increase of source frequency, the acoustic pressure of acoustical vortices increases accordingly with decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is also achieved for longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, and they agree well with the results of the numerical simulations. The favorable results of the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.

  14. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  15. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and physical layers that employ automatic repeat request and a rate-compatible punctured convolutional code over an additive white Gaussian noise channel, as well as with the channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.

  16. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency, but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…

  17. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.

  18. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
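
    The unit-stride argument above can be demonstrated even from a high-level language. The toy timing below (illustrative, with machine-dependent numbers) sums a large C-ordered array first along contiguous rows and then along strided columns; the second traversal touches a new cache line for almost every element.

      # Row-wise (unit stride) vs column-wise (large stride) traversal timing.
      import time
      import numpy as np

      a = np.zeros((4096, 4096))      # C order: rows are contiguous in memory

      t0 = time.perf_counter()
      s = 0.0
      for row in a:                   # unit stride
          s += row.sum()
      t1 = time.perf_counter()
      for col in a.T:                 # stride of one full row per element
          s += col.sum()
      t2 = time.perf_counter()
      print(f"row-wise {t1 - t0:.3f}s   column-wise {t2 - t1:.3f}s")

    On typical cache-based hardware the column-wise pass is several times slower, for exactly the reasons the abstract describes.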

  19. Rate distortion optimization for H.264 interframe coding: a general framework and algorithms.

    PubMed

    Yang, En-Hui; Yu, Xiang

    2007-07-01

    Rate distortion (RD) optimization for H.264 interframe coding with complete baseline decoding compatibility is investigated on a frame basis. Using soft decision quantization (SDQ) rather than the standard hard decision quantization, we first establish a general framework in which motion estimation, quantization, and entropy coding (in H.264) for the current frame can be jointly designed to minimize a true RD cost given previously coded reference frames. We then propose three RD optimization algorithms--a graph-based algorithm for near optimal SDQ in H.264 baseline encoding given motion estimation and quantization step sizes, an algorithm for near optimal residual coding in H.264 baseline encoding given motion estimation, and an iterative overall algorithm to optimize H.264 baseline encoding for each individual frame given previously coded reference frames--with them embedded in the indicated order. The graph-based algorithm for near optimal SDQ is the core; given motion estimation and quantization step sizes, it is guaranteed to perform optimal SDQ if the weak adjacent block dependency utilized in the context adaptive variable length coding of H.264 is ignored for optimization. The proposed algorithms have been implemented based on the reference encoder JM82 of H.264 with complete compatibility to the baseline profile. Experiments show that for a set of typical video testing sequences, the graph-based algorithm for near optimal SDQ, the algorithm for near optimal residual coding, and the overall algorithm achieve, on average, 6%, 8%, and 12% rate reduction, respectively, at the same PSNR (ranging from 30 to 38 dB) when compared with the RD optimization method implemented in the H.264 reference software.

  20. Experimental qualification of a code for optimizing gamma irradiation facilities

    NASA Astrophysics Data System (ADS)

    Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.

    Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are frequently found difficult to meet.

  1. Source-channel optimized trellis codes for bitonal image transmission over AWGN channels.

    PubMed

    Kroll, J M; Phamdo, N

    1999-01-01

    We consider the design of trellis codes for transmission of binary images over additive white Gaussian noise (AWGN) channels. We first model the image as a binary asymmetric Markov source (BAMS) and then design source-channel optimized (SCO) trellis codes for the BAMS and AWGN channel. The SCO codes are shown to be superior to Ungerboeck's codes by approximately 1.1 dB (64-state code, 10^-5 bit error probability). We also show that a simple "mapping conversion" method can be used to improve the performance of Ungerboeck's codes by approximately 0.4 dB (also 64-state code and 10^-5 bit error probability). We compare the proposed SCO system with a traditional tandem system consisting of a Huffman code, a convolutional code, an interleaver, and an Ungerboeck trellis code. The SCO system significantly outperforms the tandem system. Finally, using a facsimile image, we compare the image quality of an SCO code, an Ungerboeck code, and the tandem code. The SCO code yields the best reconstructed image quality at 4-5 dB channel SNR.

  2. Optimization of Ambient Noise Cross-Correlation Imaging Across Large Dense Array

    NASA Astrophysics Data System (ADS)

    Sufri, O.; Xie, Y.; Lin, F. C.; Song, W.

    2015-12-01

    Ambient Noise Tomography is currently one of the most studied topics of seismology. It offers the possibility of studying the physical properties of rocks from shallow subsurface depths down to upper mantle depths using recorded noise sources. A network of new seismic sensors, capable of recording continuous seismic noise and processing it on-site, could help to assess the possible risk of volcanic activity on a volcano and help to understand the changes in the physical properties of a fault before and after an earthquake occurs. This new seismic sensor technology could also be used in the oil and gas industry to determine the depletion rate of a reservoir and to improve velocity models for obtaining better seismic reflection cross-sections. Our recent NSF-funded project is bringing seismologists, signal processors, and computer scientists together to develop a new ambient noise seismic imaging system which can record continuous seismic noise, process it on-site, and send Green's functions and/or tomography images to the network. Such an imaging system requires an optimum number of sensors, sensor communication, and processing of the recorded data. In order to address these problems, we first worked on the optimum number of sensors and the communication between them, using the small-aperture dense network called the Sweetwater Array, deployed by Nodal Seismic in 2014. We downloaded ~17 days of continuous data from 2268 one-component stations recorded between March 30 and April 16, 2015 from the IRIS DMC and performed cross-correlation to determine the lag times between station pairs. The lag times were then entered in matrix form. Our goal is to select random lag-time values in the matrix, treat all other elements as missing or unknown, and apply a matrix completion technique to find out how close the reconstructed values are to the actual calculated values. This would give us better idea
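
    The matrix completion step described above can be sketched in a few lines (illustrative only, not the project's code): assuming the full station-pair lag-time matrix is approximately low rank, missing entries are recovered by iterative singular-value soft-thresholding while observed entries are held fixed.

      # "SoftImpute"-style completion of a partially observed low-rank matrix.
      import numpy as np

      rng = np.random.default_rng(0)
      n, rank = 60, 3
      truth = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
      mask = rng.random((n, n)) < 0.3            # only 30% of pairs "measured"

      X = np.where(mask, truth, 0.0)
      for _ in range(200):
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s = np.maximum(s - 1.0, 0.0)           # soft-threshold singular values
          X = np.where(mask, truth, U @ np.diag(s) @ Vt)  # refit missing entries

      err = np.linalg.norm((X - truth)[~mask]) / np.linalg.norm(truth[~mask])
      print("relative error on missing lag times:", round(err, 3))

    Comparing such reconstructed lag times against the values actually computed from cross-correlation, as the abstract proposes, indicates how sparsely the array could be sampled without losing the tomographic information.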

  3. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  4. Simultaneous optimization of dense non-aqueous phase liquid (DNAPL) source and contaminant plume remediation.

    PubMed

    Mayer, Alex; Endres, Karen L

    2007-05-14

    A framework is developed for simultaneous, optimal design of groundwater contaminant source removal and plume remediation strategies. The framework allows for varying degrees of effort and cost to be dedicated to source removal versus plume remediation. We have accounted for the presence of physical heterogeneity in the DNAPL source, since source heterogeneity controls mass release into the plume and the efficiency of source removal efforts. We considered high and low estimates of capital and operating costs for chemical flushing removal of the source, since these are expected to vary from site to site. Using the lower chemical flushing cost estimates, it is found that the optimal allocation of funds to source removal or plume remediation is sensitive to the degree of heterogeneity in the source. When the time elapsed between the source release and the implementation of remediation was varied, it was found that, except for the longest elapsed time (50,000 days), a combination of partial source removal and plume remediation was most efficient. When first-order, dissolved contaminant degradation was allowed, source removal was found to be unnecessary for the cases where the degradation rate exceeded intermediate values of the first-order rate constant. Finally, it was found that source removal became more necessary as the degree of aquifer heterogeneity increased.

  5. Optimal pyramidal and subband decompositions for hierarchical coding of noisy and quantized images.

    PubMed

    Gerassimos Strintzis, M

    1998-01-01

    Optimal hierarchical coding is sought, for progressive or scalable image transmission, by minimizing the variance of the error difference between the original image and its lower resolution renditions. The optimal, according to the above criterion, pyramidal and subband image coders are determined for images subject to corruption by quantization or transmission noise. Given arbitrary analysis filters and assuming adequate knowledge of the noise statistics, optimal synthesis filters are found. The optimal analysis filters are subsequently determined, leading to formulas for globally optimal structures for pyramidal and subband image decompositions. Experimental results illustrate the implementation and performance of the optimal coders.

  6. On-line optimization code used at Saturne

    NASA Astrophysics Data System (ADS)

    Lagniel, J. M.; Lemaire, J. L.

    A computer code has been developed to make tuning of the injection process in the Saturne synchrotron accelerator easier and to search for new sets of parameter values leading to the optimum of any criterion, the usual criterion being mainly the beam intensity given by current transformers or any non-destructive measurement device. Acquisition of the criterion is made at each cycle of the acceleration. The technique used has many advantages

  7. Emergence of optimal decoding of population codes through STDP.

    PubMed

    Habenschuss, Stefan; Puhr, Helmut; Maass, Wolfgang

    2013-06-01

    The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli. PMID:23517096
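
    As background for the plasticity rule at the center of this result, the sketch below implements a generic pair-based exponential STDP update (not the paper's specific homeostatic rule): a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, and weakened otherwise. The constants are illustrative.

      # Pair-based exponential STDP weight update.
      import numpy as np

      A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0    # toy constants; TAU in ms

      def stdp_dw(t_pre, t_post):
          """Weight change for one pre/post spike pair (times in ms)."""
          dt = t_post - t_pre
          if dt > 0:
              return A_PLUS * np.exp(-dt / TAU)   # pre before post: potentiate
          return -A_MINUS * np.exp(dt / TAU)      # post before pre: depress

      w = 0.5
      for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0), (60.0, 62.0)]:
          w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
      print("weight after three spike pairs:", round(w, 4))

    The paper's contribution is to show that rules of this family, combined with lateral inhibition and homeostatic plasticity, drive readout weights toward the theoretically optimal likelihood-decoding values.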

  8. A new algorithm for optimizing the wavelength coverage for spectroscopic studies: Spectral Wavelength Optimization Code (SWOC)

    NASA Astrophysics Data System (ADS)

    Ruchti, G. R.; Feltzing, S.; Lind, K.; Caffau, E.; Korn, A. J.; Schnurr, O.; Hansen, C. J.; Koch, A.; Sbordone, L.; de Jong, R. S.

    2016-09-01

    The past decade and a half has seen the design and execution of several ground-based spectroscopic surveys, both Galactic and Extragalactic. Additionally, new surveys are being designed that extend the boundaries of current surveys. In this context, many important considerations must be made when designing a spectrograph for the future. Among these is the determination of the optimum wavelength coverage. In this work, we present a new code for determining the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a given survey. In its first mode, it utilizes a user-defined list of spectral features to compute a figure-of-merit for different spectral configurations. The second mode utilizes a set of flux-calibrated spectra, determining the spectral regions that show the largest differences among the spectra. Our algorithm is easily adaptable for any set of science requirements and any spectrograph design. We apply the algorithm to several examples, including 4MOST, showing that the method yields important design constraints on the wavelength regions.
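
    The first mode lends itself to a very small illustration. The sketch below (a simplification; SWOC's actual figure-of-merit is more elaborate) scores a candidate wavelength window by the summed weights of the user-defined spectral features it covers, then scans window placements. The feature list uses a few well-known optical lines with made-up weights.

      # Score wavelength windows by coverage of weighted spectral features.
      import numpy as np

      # wavelength (Angstrom) -> importance weight (illustrative values)
      features = {4861.3: 1.0,                  # H-beta
                  5167.3: 0.8, 5172.7: 0.8,     # Mg b lines
                  6562.8: 1.0,                  # H-alpha
                  8542.1: 0.9}                  # Ca II triplet line

      def figure_of_merit(lo, hi):
          return sum(w for lam, w in features.items() if lo <= lam <= hi)

      width = 800.0                             # fixed window width
      starts = np.arange(4500.0, 8200.0, 50.0)
      best = max(starts, key=lambda lo: figure_of_merit(lo, lo + width))
      print(f"best window: {best:.0f}-{best + width:.0f} A, "
            f"FoM = {figure_of_merit(best, best + width)}")

    The second mode replaces the feature list with flux-calibrated spectra and looks for the wavelength regions where the spectra differ most, but the same window-scanning skeleton applies.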

  9. Wireless image transmission using turbo codes and optimal unequal error protection.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2005-11-01

    A novel image transmission scheme is proposed for the communication of set partitioning in hierarchical trees image streams over wireless channels. The proposed scheme employs turbo codes and Reed-Solomon codes in order to deal effectively with burst errors. An algorithm for the optimal unequal error protection of the compressed bitstream is also proposed and applied in conjunction with an inherently more efficient technique for product code decoding. The resulting scheme is tested for the transmission of images over wireless channels. Experimental evaluation clearly demonstrates the superiority of the proposed transmission system in comparison to well-known robust coding schemes.

  10. Joint optimization of run-length coding, Huffman coding, and quantization table with complete baseline JPEG decoder compatibility.

    PubMed

    Yang, En-hui; Wang, Longji

    2009-01-01

    To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc. PMID:19095519
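
    One building block of the iterative loop, the Huffman-table update given the run-size statistics collected in the most recent coding pass, can be sketched as below. The statistics are invented, and the graph-based optimization of the DCT indices is omitted.

      import heapq
      from collections import Counter

      def huffman_lengths(freqs):
          """Optimal prefix-code lengths for the given symbol frequencies."""
          heap = [(f, i, (s,)) for i, (s, f) in enumerate(freqs.items())]
          heapq.heapify(heap)
          lengths, uid = Counter(), len(heap)
          while len(heap) > 1:
              f1, _, s1 = heapq.heappop(heap)
              f2, _, s2 = heapq.heappop(heap)
              for s in s1 + s2:      # each merge deepens its member symbols by one
                  lengths[s] += 1
              heapq.heappush(heap, (f1 + f2, uid, s1 + s2))
              uid += 1
          return dict(lengths)

      # Hypothetical (run, size) pair statistics from one coding pass.
      stats = Counter({(0, 1): 120, (0, 2): 80, (0, 0): 60, (1, 1): 45,
                       (0, 3): 30, (2, 1): 15, (15, 0): 8})  # (0, 0) = end-of-block
      print(huffman_lengths(stats))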

  11. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
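
    The reverse-mode chain rule that ADJIFOR mechanizes for Fortran can be shown in miniature with a hand-rolled adjoint class. This toy illustrates the principle only; the class, operators, and test function are assumptions, not the tool's output.

      import math

      class Var:
          """Minimal reverse-mode AD node: value plus (parent, local-derivative) edges."""
          def __init__(self, value, parents=()):
              self.value, self.parents, self.grad = value, parents, 0.0

          def __add__(self, other):
              return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

          def __mul__(self, other):
              return Var(self.value * other.value,
                         [(self, other.value), (other, self.value)])

          def sin(self):
              return Var(math.sin(self.value), [(self, math.cos(self.value))])

          def backward(self, seed=1.0):
              self.grad += seed                      # accumulate the adjoint
              for parent, local in self.parents:
                  parent.backward(seed * local)      # reverse-mode chain rule

      x, y = Var(2.0), Var(3.0)
      f = x * y + x.sin()          # f = x*y + sin(x)
      f.backward()
      print(x.grad, y.grad)        # df/dx = y + cos(x), df/dy = x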

  12. A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.

    PubMed

    Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick

    2015-11-01

    In this paper, we investigate the problem of optimizing multivariate performance measures and propose a novel algorithm for it. Different from traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points as a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, sparse codes, and parameter of the linear function, we propose a joint optimization problem in which both the reconstruction error and sparsity of the sparse codes and the upper bound of the complex loss function are minimized. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameter. To solve this problem, we develop an iterative algorithm based on gradient descent that learns the sparse codes and the hyper-predictor parameter alternately. Experimental results on several benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms.
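
    The sparse-coding subproblem of the alternating scheme, solving for the codes with the dictionary held fixed, can be sketched with plain ISTA. The dictionary-update and hyper-predictor steps are omitted, and the sizes and regularization weight are assumptions.

      import numpy as np

      def soft_threshold(v, lam):
          return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

      def sparse_code(D, x, lam=0.1, n_iter=200):
          """ISTA: minimize 0.5 * ||x - D z||^2 + lam * ||z||_1 over the code z."""
          L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
          z = np.zeros(D.shape[1])
          for _ in range(n_iter):
              z = soft_threshold(z - D.T @ (D @ z - x) / L, lam / L)
          return z

      rng = np.random.default_rng(0)
      D = rng.standard_normal((20, 50))
      D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
      x = D[:, [3, 17]] @ np.array([1.5, -2.0])  # signal built from two atoms
      z = sparse_code(D, x)
      print(np.nonzero(np.round(z, 2))[0])       # recovers (mostly) atoms 3 and 17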

  13. Lossy compression of MERIS superspectral images with exogenous quasi optimal coding transforms

    NASA Astrophysics Data System (ADS)

    Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis

    2009-08-01

    Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can run on board a satellite. It is well known that the Karhunen-Loève Transform (KLT) can be sub-optimal in transform coding for non-Gaussian data; however, it is generally recommended as the best calculable linear coding transform in practice. Recently, the concept and computation of optimal coding transforms (OCT), under weakly restrictive hypotheses at high bit rates, were carried out and adapted to a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for on-board satellite image compression, leading to the concept and computation of Optimal Spectral Transforms (OST). These linear transforms are optimal for reducing the spectral redundancies of multi- or hyper-spectral images when the spatial redundancies are reduced with a fixed 2-D Discrete Wavelet Transform (DWT). The problem with OSTs is their heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of superspectral images from the MERIS spectrometer. The performance is presented in terms of bit rate versus distortion for four different distortion measures and compared to that of the KLT. We observe good performance of the exogenous OrthOST, as was the case on Hyperion hyperspectral images in previous work.

  14. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.

  15. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The fieldwork associated with Task 1 (Baseline Assessment) was completed this quarter. Detailed cyclone inspections were completed at all but one plant during maintenance shifts. Analysis of the test samples is also currently underway in Task 4 (Sample Analysis). A Draft Recommendation was prepared for the management at each test site in Task 2 (Circuit Modification). All required procurements were completed. Density tracers were manufactured and tested for quality control purposes. Special sampling tools were also purchased and/or fabricated for each plant site. The preliminary experimental data show that the partitioning performance for all seven HMC circuits was generally good. This was attributed to well-maintained cyclones and good operating practices. However, the density tracers detected that most circuits suffered from poor control of media cutpoint. These problems were attributed to poor x-ray calibration and improper manual density measurements. These conclusions will be validated after the analyses of the composite samples have been completed.

  16. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    David M. Hyman

    2002-01-14

    All work associated with Task 1 (Baseline Assessment) was successfully completed and preliminary corrections/recommendations were provided back to the management at each test site. Detailed float-sink tests were completed for Site No. 1 and are currently underway for Sites No. 2-No. 4. Unfortunately, the work associated with sample analyses (Task 4, Sample Analysis) has been delayed because of a backlog of coal samples at the commercial laboratory participating in this project. As a result, a no-cost project time extension may be necessary in order to complete the project. A decision will be made at the end of the next reporting period. Some of the work completed this quarter included (i) development of mass balance routines for data analysis, (ii) formulation of an expert system rule base, and (iii) completion of statistical computations and mathematical curve fits for the density tracer test data. In addition, an "O&M Checklist" was prepared to provide plant operators with simple operating and maintenance guidelines that must be followed to obtain good HMC performance.

  17. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The project start date was delayed by approximately 7 weeks due to contractual difficulties. Although the original start date was December 14, 2000, the Principal Investigator did not receive the Project Authorization Notice (PAN) from the Virginia Tech Office of Sponsored Programs until February 5, 2001. Therefore, the first project task (i.e., Project Planning) did not begin until February 2001. Activities completed as part of this effort included: (i) revision and updating of the Project Work Plan, (ii) preparation of equipment procurement documents for the Virginia Tech Purchasing Office, and (iii) initiation of preliminary site visits to several coal preparation plants to discuss test work with industrial personnel. After a brief (2 month) contractual delay, project activities are now underway. There are currently no contractual issues or technical problems associated with this project. Project work activities are now expected to proceed in accordance with the proposed project schedule.

  18. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-09-09

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC troubleshooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Efforts are underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  19. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-01-15

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC troubleshooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Efforts are underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  20. The genetic code and its optimization for kinetic energy conservation in polypeptide chains.

    PubMed

    Guilloux, Antonin; Jestin, Jean-Luc

    2012-08-01

    Why is the genetic code the way it is? Concepts from fields as diverse as molecular evolution, classical chemistry, biochemistry and metabolism have been used to define the selection pressures most likely to be involved in the shaping of the genetic code. Here, minimization of kinetic energy disturbances during protein evolution by mutation is used to highlight an optimization of the genetic code. The quadratic forms corresponding to the kinetic energy term are considered over the field of rational numbers. Arguments are given to support the introduction of notions from basic number theory within this context. The observations found to be consistent with this minimization are statistically significant. The genetic code may well have been optimized according to energetic criteria so as to improve the folding and dynamic properties of polypeptide chains.

  1. New developments of the CARTE thermochemical code: I. Parameter optimization

    NASA Astrophysics Data System (ADS)

    Desbiens, N.; Dubois, V.

    We present the calibration of the CARTE thermochemical code, which computes the properties of a wide variety of CHON explosives. We have developed an optimization procedure to obtain an accurate multicomponent EOS (fluid phase and condensed phase of carbon). We show here that the results of the CARTE code are in good agreement with specific data for molecular systems, and we extensively compare our calculations with measured detonation properties for several explosives.

  2. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
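
    The adaptive selection step is easy to make concrete: for each block, pick the Rice parameter k, with each k defining one code option, that minimizes the encoded length. The block values stand in for prediction residuals already mapped to non-negative integers; this follows the generic Rice scheme, not the module's exact option set.

      def rice_length(samples, k):
          """Encoded bit count under Rice parameter k: unary quotient + k-bit remainder."""
          return sum((s >> k) + 1 + k for s in samples)

      def best_option(samples, k_max=8):
          """Adaptively select the cheapest code option for this block."""
          return min(range(k_max + 1), key=lambda k: rice_length(samples, k))

      block = [3, 1, 0, 6, 2, 4, 1, 0, 9, 2, 3, 1]   # mapped prediction residuals
      k = best_option(block)
      print(k, rice_length(block, k), rice_length(block, 0))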

  3. Optimizations of the energy grid search algorithm in continuous-energy Monte Carlo particle transport codes

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.

    2015-11-01

    In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
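
    A minimal sketch of one such physics-informed idea: when scattering kinematics bound how far the energy can move, search only a small window around the previous grid index and fall back to a full binary search when the bound is exceeded. The grid, window size, and energy change below are assumptions, and real implementations such as OpenMC's are considerably more elaborate.

      import numpy as np

      # Hypothetical unionized energy grid (eV), ascending.
      grid = np.sort(np.random.default_rng(2).uniform(1e-5, 2e7, 100_000))

      def find_index(grid, energy, last=None, window=64):
          """Index of the largest grid point <= energy, preferring a windowed search."""
          if last is not None:
              lo, hi = max(last - window, 0), min(last + window, len(grid))
              if grid[lo] <= energy < grid[hi - 1]:     # bound holds: short search
                  return lo + int(np.searchsorted(grid[lo:hi], energy, 'right')) - 1
          return int(np.searchsorted(grid, energy, 'right')) - 1

      i = find_index(grid, 1.0e6)                       # full search on first lookup
      j = find_index(grid, grid[i] * 0.98, last=i)      # windowed search after scatter
      print(i, j)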

  4. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    PubMed

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases. PMID:16830898

  5. Dispersion-optimized optical fiber for high-speed long-haul dense wavelength division multiplexing transmission

    NASA Astrophysics Data System (ADS)

    Wu, Jindong; Chen, Liuhua; Li, Qingguo; Wu, Wenwen; Sun, Keyuan; Wu, Xingkun

    2011-07-01

    Four non-zero-dispersion-shifted fibers with almost the same large effective area (Aeff) and optimized dispersion properties are realized by novel index profile designs and modified vapor axial deposition and modified chemical vapor deposition processes. An Aeff of greater than 71 μm² is obtained for the designed fibers. Three of the developed fibers with positive dispersion are improved by reducing the 1550 nm dispersion slope from 0.072 ps/nm²/km to 0.063 ps/nm²/km or 0.05 ps/nm²/km, increasing the 1550 nm dispersion from 4.972 ps/nm/km to 5.679 ps/nm/km or 7.776 ps/nm/km, and shifting the zero-dispersion wavelength from 1500 nm to 1450 nm. One of these fibers conforms to both the G.655D and G.656 recommendations, and another to G.655E and G.656; both fibers are beneficial to high-bit-rate long-haul dense wavelength division multiplexing systems over the S-, C-, and L-bands. The fourth developed fiber, with negative dispersion, is also improved by reducing the 1550 nm dispersion slope from 0.12 ps/nm²/km to 0.085 ps/nm²/km and increasing the magnitude of the 1550 nm dispersion from -4 ps/nm/km to -6.016 ps/nm/km, providing facilities for submarine transmission systems. Experimental measurements indicate that the developed fibers all have excellent optical transmission and good macrobending and splice performances.

  6. Optimal Multicarrier Phase-Coded Waveform Design for Detection of Extended Targets

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2013-01-01

    We design a parametric multicarrier phase-coded (MCPC) waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. Traditional waveform design techniques provide only the optimal energy spectral density of the transmit waveform and suffer a performance loss in the synthesis process of the time-domain signal. Therefore, we opt for directly designing an MCPC waveform in terms of its time-frequency codes to obtain the optimal detection performance. First, we describe the modeling assumptions considering an extended target buried within the signal-dependent clutter with known power spectral density, and deduce the performance characteristics of the optimal detector. Then, considering an MCPC signal transmission, we express the detection characteristics in terms of the phase-codes of the MCPC waveform and propose to optimally design the MCPC signal by maximizing the detection probability. Our numerical results demonstrate that the designed MCPC signal attains the optimal detection performance and requires less computational time than the other parametric waveform design approach.

  7. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511 keV source strength of 0.001 photons/sq cm/s, the source location accuracy is expected to be ±0.2 deg.

  8. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  9. Final Report: A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    SciTech Connect

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan; Quinlan, Daniel

    2013-11-23

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully automated tuning to semi-automated development and to manual programmable control.

  10. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    SciTech Connect

    Watkins, E.F.; Greenspan, E.

    1995-12-31

    SWAN is a useful code for optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost for an ICF target chamber shield, and the background radiation for explosive detection systems, and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module can handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability.

  11. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher-information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  12. On the Optimized Atomic Exchange Potential method and the CASSANDRA opacity code

    NASA Astrophysics Data System (ADS)

    Jeffery, M.; Harris, J. W. O.; Hoarty, D. J.

    2016-09-01

    The CASSANDRA average-atom opacity code uses the local density approximation (LDA) to calculate electron exchange interactions, and this introduces inaccuracies due to the inconsistent treatment of the Coulomb and exchange energy terms of the average total energy equation. To correct this inconsistency, the Optimized Atomic Central Potential Method (OPM) of calculating exchange interactions has been incorporated into CASSANDRA. The LDA and OPM formalisms are discussed and the reason for the discrepancy when using the LDA is highlighted. CASSANDRA uses a Taylor series expansion about an average atom when computing transition energies and uses Janak's Theorem to determine the Taylor series coefficients. Janak's Theorem does not apply to the OPM; however, a corollary to Janak's Theorem has been employed in the OPM implementation. A derivation of this corollary is provided. Results of simulations from CASSANDRA using the OPM are shown and compared against CASSANDRA LDA, DAVROS (a detailed term accounting opacity code), the GRASP2K atomic physics code, and experimental data.

  13. SPRITE - A computer code for the optimization of space based heat pipe radiator systems

    NASA Technical Reports Server (NTRS)

    Buksa, John J.; Williams, Kenneth A.

    1989-01-01

    An integrated analytical tool has been developed for use in designing optimized space-based heat pipe radiator systems. This code, SPRITE-1, incorporates the thermal, structural, and reliability aspects of the radiator into a single framework from which a physically consistent design can be obtained. A parametric study of the integral heat pipe panel radiator was performed using SPRITE-1, and a preliminary minimum mass design was obtained. The radiator design is summarized, and the mass minimization method and results are presented.

  14. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES Beta

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  15. An application of anti-optimization in the process of validating aerodynamic codes

    NASA Astrophysics Data System (ADS)

    Cruz, Juan R.

    An investigation was conducted to assess the usefulness of anti-optimization in the process of validating aerodynamic codes. Anti-optimization is defined here as the intentional search for regions where the computational and experimental results disagree. Maximizing such disagreements can be a useful tool in uncovering errors and/or weaknesses in both analyses and experiments. The codes chosen for this investigation were an airfoil code and a lifting line code used together as an analysis to predict three-dimensional wing aerodynamic coefficients. The parameter of interest was the maximum lift coefficient of the three-dimensional wing, CL max. The test domain encompassed Mach numbers from 0.3 to 0.8, and Reynolds numbers from 25,000 to 250,000. A simple rectangular wing was designed for the experiment. A wind tunnel model of this wing was built and tested in the NASA Langley Transonic Dynamics Tunnel. Selection of the test conditions (i.e., Mach and Reynolds numbers) was made by applying the techniques of response surface methodology and considerations involving the predicted experimental uncertainty. The test was planned and executed in two phases. In the first phase, runs were conducted at the pre-planned test conditions. Based on these results, additional runs were conducted in areas where significant differences in CL max were observed between the computational results and the experiment, in essence applying the concept of anti-optimization. These additional runs were used to verify the differences in CL max and assess the extent of the region where these differences occurred. The results of the experiment showed that the analysis was capable of predicting CL max to within 0.05 over most of the test domain. The application of anti-optimization succeeded in identifying a region where the computational and experimental values of CL max differed by more than 0.05, demonstrating the usefulness of anti-optimization in the process of validating aerodynamic codes.
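
    The search pattern itself is simple to sketch: evaluate prediction and measurement over the Mach-Reynolds test domain and maximize their disagreement. Both functions below are invented stand-ins (the real values came from the airfoil/lifting-line analysis and the wind tunnel); only the anti-optimization loop is the point.

      import numpy as np
      from itertools import product

      def predicted_clmax(mach, re):
          """Stand-in for the airfoil/lifting-line prediction (assumed form)."""
          return 1.4 - 0.5 * mach + 0.05 * np.log10(re / 25e3)

      def measured_clmax(mach, re):
          """Stand-in for wind-tunnel data, with a localized anomaly built in."""
          bump = 0.12 * np.exp(-(mach - 0.65) ** 2 / 0.002
                               - (np.log10(re) - 4.5) ** 2 / 0.02)
          return predicted_clmax(mach, re) - bump

      # Anti-optimization: search the test domain for maximum disagreement.
      machs = np.linspace(0.3, 0.8, 26)
      res = np.logspace(np.log10(25e3), np.log10(250e3), 21)
      worst = max(product(machs, res),
                  key=lambda p: abs(predicted_clmax(*p) - measured_clmax(*p)))
      print(f"largest CLmax disagreement near Mach {worst[0]:.2f}, Re {worst[1]:.0f}")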

  16. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis: one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  17. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  18. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  19. Finite population analysis of the effect of horizontal gene transfer on the origin of a universal and optimal genetic code.

    PubMed

    Aggarwal, Neha; Bandhu, Ashutosh Vishwa; Sengupta, Supratim

    2016-05-27

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA and protein based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold, we find that the ten amino acid code having a structure that is most consistent with the standard genetic code (SGC) often gets fixed in the population with the highest probability. We examine how the threshold is determined by factors like the population size, length of the sequences and selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.

  20. Finite population analysis of the effect of horizontal gene transfer on the origin of a universal and optimal genetic code

    NASA Astrophysics Data System (ADS)

    Aggarwal, Neha; Vishwa Bandhu, Ashutosh; Sengupta, Supratim

    2016-06-01

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA and protein based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold, we find that the ten amino acid code having a structure that is most consistent with the standard genetic code (SGC) often gets fixed in the population with the highest probability. We examine how the threshold is determined by factors like the population size, length of the sequences and selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.

  1. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years, digital imaging devices have become an integral part of our daily lives due to the advancements in imaging, storage and wireless communication technologies. Power-Rate-Distortion efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error resilient source coding techniques should be considered in conjunction with the P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on the scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gains under different power constraints.

  2. Optimizing the search for high-z GRBs: the JANUS X-ray coded aperture telescope

    NASA Astrophysics Data System (ADS)

    Burrows, D. N.; Fox, D.; Palmer, D.; Romano, P.; Mangano, V.; La Parola, V.; Falcone, A. D.; Roming, P. W. A.

    We discuss the optimization of gamma-ray burst (GRB) detectors with a goal of maximizing the detected number of bright high-redshift GRBs, in the context of design studies conducted for the X-ray transient detector on the JANUS mission. We conclude that the optimal energy band for detection of high-z GRBs is below about 30 keV. We considered both lobster-eye and coded aperture designs operating in this energy band. Within the available mass and power constraints, we found that the coded aperture mask was preferred for the detection of high-z bursts with bright enough afterglows to probe galaxies in the era of the Cosmic Dawn. This initial conclusion was confirmed through detailed mission simulations that found that the selected design (an X-ray Coded Aperture Telescope) would detect four times as many bright, high-z GRBs as the lobster-eye design we considered. The JANUS XCAT instrument will detect 48 GRBs with z > 5 and fluence S_x > 3 × 10^-7 erg cm^-2 in a two-year mission.

  3. The SWAN-SCALE code for the optimization of critical systems

    SciTech Connect

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-07-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.

  4. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    NASA Astrophysics Data System (ADS)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with the specific algorithm. In this paper, we investigated a parameter optimization strategy for Optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The metric for registration error for a given parameter set was computed using landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time in the parameter optimization process, a GPU based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses on the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical-flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
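
    The optimizer's skeleton can be sketched as fast simulated annealing with heavy-tailed (Cauchy) proposals over the registration parameters. The mTRE surrogate below is an invented placeholder for the landmark-based error, and the paper's adaptive Monte Carlo sampling and GPU optical-flow evaluation are omitted.

      import numpy as np

      rng = np.random.default_rng(3)

      def mtre(params):
          """Surrogate for the landmark-based registration error (assumed)."""
          target = np.array([0.8, 0.15, 2.0])       # hypothetical optimum
          return float(np.sum((params - target) ** 2)
                       + 0.01 * rng.standard_normal())

      def fast_sa(x0, bounds, n_iter=2000, t0=1.0):
          x = np.array(x0, float)
          fx = mtre(x)
          best, fbest = x.copy(), fx
          for k in range(1, n_iter + 1):
              t = t0 / k                                   # fast-annealing schedule
              cand = np.clip(x + t * rng.standard_cauchy(len(x)),
                             bounds[:, 0], bounds[:, 1])   # heavy-tailed proposal
              fc = mtre(cand)
              if fc < fx or rng.random() < np.exp(-(fc - fx) / max(t, 1e-12)):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = x.copy(), fx
          return best, fbest

      bounds = np.array([[0.0, 2.0], [0.0, 1.0], [0.5, 5.0]])
      print(fast_sa([1.0, 0.5, 1.0], bounds))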

  5. Optimizing performance of superscalar codes for a single Cray X1 MSP processor

    SciTech Connect

    Shan, Hongzhang; Strohmaier, Erich; Oliker, Leonid

    2004-06-08

    The growing gap between sustained and peak performance for full-scale complex scientific applications on conventional supercomputers is a major concern in high performance computing. The recently released vector-based Cray X1 offers to bridge this gap for many demanding scientific applications. However, this unique architecture contains both data caches and multi-streaming processing units, and the optimal programming methodology is still under investigation. In this paper we investigate Cray X1 code optimization for a suite of computational kernels originally designed for superscalar processors. For our study, we select four applications from the SPLASH2 application suite (1-D FFT, Radix, Ocean, and Nbody), two kernels from the NAS benchmark suite (3-D FFT and CG), and a matrix-matrix multiplication kernel. Results show that in many cases the addition of vectorization compiler directives results in faster runtimes. However, to achieve a significant performance improvement via increased vector length, it is often necessary to restructure the program at the source level, sometimes leading to algorithmic-level transformations. Additionally, memory bank conflicts may result in substantial performance losses. These conflicts can often be exacerbated when optimizing code for increased vector lengths, and must be explicitly minimized. Finally, we investigate the effect of the X1 data caches on overall performance.

  6. Optimal tracking performance of MIMO control systems with communication constraints and a code scheme

    NASA Astrophysics Data System (ADS)

    Zhan, Xi-Sheng; Guan, Zhi-Hong; Zhang, Xian-He; Yuan, Fu-Shun

    2015-02-01

    This paper investigates the issue of the optimal tracking performance for multiple-input multiple-output linear time-invariant continuous-time systems under a power constraint. An H2 criterion of the error signal and the input-channel signal is used as a measure of the tracking performance. A code scheme is introduced as a means of integrating controller and channel design to obtain the optimal tracking performance. It is shown that the optimal tracking performance index consists of two parts: one depends on the non-minimum-phase zeros and zero directions of the given plant, as well as the reference input signal, while the other depends on the unstable poles and pole directions of the given plant, as well as on the bandwidth and additive white noise of the communication channel. It is also shown that in the absence of the communication channel, the optimal tracking performance reduces to the existing normal tracking performance of the control system. The results show how the optimal tracking performance is limited by the bandwidth and additive white noise of the communication channel. A typical example is given to illustrate the theoretical results.

  7. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  8. An Integer-Coded Chaotic Particle Swarm Optimization for Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Yue, Chen; Yan-Duo, Zhang; Jing, Lu; Hui, Tian

    The Traveling Salesman Problem (TSP) is one of the NP-hard combinatorial optimization problems and suffers a combinatorial explosion when the problem grows beyond a certain size, so the search for an effective solution method has been a hot topic. The general mathematical model of the TSP is discussed, and its permutation-and-combination-based model is presented. Based on these, an Integer-coded Chaotic Particle Swarm Optimization (ICPSO) for solving the TSP is proposed, in which particles are encoded as integers, a chaotic sequence is used to guide the global search, and particles vary their positions via "flying". With a typical 20-city TSP as the instance, a simulation experiment comparing ICPSO with GA is carried out. Experimental results demonstrate that ICPSO is simple but effective, and outperforms GA.
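
    A compact sketch in this spirit: particles are integer permutations, a logistic map supplies the chaotic sequence, and "flying" is realized as swaps that pull a tour toward its personal and global bests. The update operator below is a simplified assumption, not the authors' exact scheme.

      import random

      random.seed(0)
      N = 20
      CITIES = [(random.random(), random.random()) for _ in range(N)]

      def tour_length(t):
          return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                      (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
                     for a, b in zip(t, t[1:] + t[:1]))

      def chaotic(c):
          return 4.0 * c * (1.0 - c)       # logistic map in its chaotic regime

      def fly_toward(tour, guide, c):
          """Swap cities into agreement with `guide` at chaotically chosen spots."""
          tour = tour[:]
          for i in range(N):
              c = chaotic(c)
              if c < 0.5 and tour[i] != guide[i]:
                  j = tour.index(guide[i])
                  tour[i], tour[j] = tour[j], tour[i]
          return tour, c

      c = 0.33
      swarm = [random.sample(range(N), N) for _ in range(30)]
      pbest, gbest = swarm[:], min(swarm, key=tour_length)
      for _ in range(200):
          for i in range(len(swarm)):
              swarm[i], c = fly_toward(swarm[i], pbest[i], c)
              swarm[i], c = fly_toward(swarm[i], gbest, c)
              if tour_length(swarm[i]) < tour_length(pbest[i]):
                  pbest[i] = swarm[i]
          gbest = min(pbest, key=tour_length)
      print(round(tour_length(gbest), 3))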

  9. Optimization of wavefront-coded infinity-corrected microscope systems with extended depth of field.

    PubMed

    Zhao, Tingyu; Mauger, Thomas; Li, Guoqiang

    2013-01-01

    The depth of field of an infinity-corrected microscope system is greatly extended by simply placing a specially designed phase mask between the objective and the tube lens. In comparison with the method of modifying the structure of the objective, this is more cost effective and provides improved flexibility in assembling the system. Instead of using an ideal optical system for simulation, which was the focus of previous research, a practical wavefront-coded infinity-corrected microscope system is designed in this paper by considering the various aberrations. Two new optimization methods, based on commercial optical design software, are proposed to design a wavefront-coded microscope using a non-symmetric phase mask and a symmetric phase mask, respectively. We use a polynomial phase mask and a rational phase mask as examples of the non-symmetric and symmetric phase masks, respectively. Simulation results show that both optimization methods work well for a 32× infinity-corrected microscope system with 0.6 numerical aperture. The depth of field is extended to about 13 times that of the traditional system.

  10. Optimized conical shaped charge design using the SCAP (Shaped Charge Analysis Program) code

    SciTech Connect

    Vigil, M.G.

    1988-09-01

    The Shaped Charge Analysis Program (SCAP) is used to analytically model and optimize the design of Conical Shaped Charges (CSC). A variety of existing CSCs are initially modeled with the SCAP code, and the predicted jet tip velocities, jet penetrations, and optimum standoffs are compared to previously published experimental results. The CSCs vary in size from 0.69 inch (1.75 cm) to 9.125 inch (23.18 cm) conical liner inside diameter. Two liner materials (copper and steel) and several explosives (Octol, Comp B, PBX-9501) are included in the CSCs modeled. The target material was mild steel. A parametric study was conducted using the SCAP code to obtain the optimum design for a 3.86 inch (9.8 cm) CSC. The variables optimized in this study included the CSC apex angle, conical liner thickness, explosive height, optimum standoff, tamper/confinement thickness, and explosive width. The non-dimensionalized jet penetration-to-diameter ratio versus the above parameters is graphically presented. 12 refs., 10 figs., 7 tabs.

  11. Optimizing Antenna Layout for ITER Low Field Side Reflectometer using 3D Ray Tracing Code

    NASA Astrophysics Data System (ADS)

    Newbury, Sarah; Zolfaghari, Ali

    2014-10-01

    The ITER Low Field Side Reflectometer (LFSR) is being designed to provide electron density profile measurements for both the core and edge plasma by launching millimeter waves into the plasma and detecting the reflected wave with a receive antenna. Because detection of the received signal is integral to determining the density profile, an important goal in designing the LFSR is to optimize the coupling between the launch and receive antennas. This project investigates that coupling using Genray, a 3D ray-tracing code, to simulate the propagation of millimeter waves launched into and reflected by the plasma for a typical ITER case. Based on the results of the code, beam footprints will be estimated for different cases in which both the height and toroidal angle of the launch antenna are varied. The footprints will be compared, allowing conclusions to be drawn about the optimal antenna layout for the LFSR. This method will be carried out for various frequencies of both O-mode and X-mode waves, and the effect of the plasma's scrape-off layer will also be considered.

  12. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.

    2015-12-01

    The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package into the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids whose tasks are compute- and memory-intensive. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are introduced, decreasing the elapsed execution time of the studied routines as much as possible. If multi-core processing is available, the user can activate OpenMP directives to speed up execution using all resources of the CPU. If multi-node processing is available, execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency results are shown.
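
    The shared-memory half of this strategy can be illustrated in miniature (this is not GSLIB code; the pair-partitioning scheme and all names are assumptions): the experimental-variogram pair loop is split across worker processes, analogous to adding OpenMP directives to the compute-intensive Fortran loops.

```python
import numpy as np
from multiprocessing import Pool

def _partial_variogram(args):
    """Pair sums for rows [start, stop): one worker's share of the pair loop."""
    coords, values, lags, tol, start, stop = args
    gamma, count = np.zeros(len(lags)), np.zeros(len(lags))
    for i in range(start, stop):
        d = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
        dv2 = (values[i + 1:] - values[i]) ** 2
        for k, h in enumerate(lags):
            m = np.abs(d - h) <= tol
            gamma[k] += dv2[m].sum()
            count[k] += m.sum()
    return gamma, count

def variogram(coords, values, lags, tol=0.5, workers=4):
    n = len(values)
    bounds = np.linspace(0, n - 1, workers + 1).astype(int)
    tasks = [(coords, values, lags, tol, bounds[w], bounds[w + 1])
             for w in range(workers)]
    with Pool(workers) as pool:
        parts = pool.map(_partial_variogram, tasks)   # fan out, then reduce
    gamma = sum(p[0] for p in parts)
    count = sum(p[1] for p in parts)
    return gamma / (2.0 * np.maximum(count, 1))       # semivariance per lag

if __name__ == "__main__":   # required for multiprocessing on some platforms
    rng = np.random.default_rng(0)
    pts = rng.random((500, 2)) * 10
    vals = np.sin(pts[:, 0]) + rng.normal(0, 0.1, 500)
    print(variogram(pts, vals, lags=np.arange(1, 6)))
```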

  13. An investigation of design optimization using a 2-D viscous flow code with multigrid

    NASA Technical Reports Server (NTRS)

    Doria, Michael L.

    1990-01-01

    Computational fluid dynamics (CFD) codes have advanced to the point where they are effective analytical tools for solving flow fields around complex geometries. There is also a need to use them as design tools for finding optimum aerodynamic shapes. In design work, however, a difficulty arises from the large amount of computer resources these codes require; it is desirable to streamline the design process so that a large number of design options and constraints can be investigated without overloading the system. Several techniques have been proposed to help with this, and the feasibility of one of them, coupling the geometry change directly with the flow calculation, is investigated here. The test problem was to find the value of camber that maximizes the lift-to-drag ratio of a NACA 0012 airfoil at a free-stream Mach number of 0.5 and zero angle of attack, with camber added along the mean line of the airfoil. The flow code used was FLOMGE, a two-dimensional viscous flow solver that uses multigrid to speed up convergence; a hyperbolic grid generation program was used to construct the grid for each value of camber.
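
    The driver for such a study can be as simple as a one-dimensional search that treats the flow solver as a black box. The sketch below uses golden-section search, chosen because it needs few function evaluations when each evaluation is a full grid-generation and flow solution; `lift_drag_ratio` is a hypothetical placeholder for the FLOMGE run, not the actual code.

```python
def lift_drag_ratio(camber):
    # Placeholder aerodynamic response with an interior maximum; a real
    # driver would regenerate the grid and run the flow solver here.
    return 40.0 * camber - 300.0 * camber ** 2

def golden_section_max(f, a, b, tol=1e-4):
    """Golden-section search: maximizes a unimodal f on [a, b] using few
    function calls -- important when each call is a full CFD solution."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                       # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:                             # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return (a + b) / 2

best = golden_section_max(lift_drag_ratio, 0.0, 0.10)
print(f"optimal camber ~ {best:.4f}, L/D ~ {lift_drag_ratio(best):.2f}")
```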

  14. Dense, shape‐optimized posterior 32‐channel coil for submillimeter functional imaging of visual cortex at 3T

    PubMed Central

    Grigorov, Filip; van der Kouwe, Andre J.; Wald, Lawrence L.; Keil, Boris

    2015-01-01

    Purpose: Functional neuroimaging of small cortical patches such as columns is essential for testing computational models of vision, but imaging from cortical columns at conventional 3T fields is exceedingly difficult. By targeting the visual cortex exclusively, we tested whether combined optimization of shape, coil placement, and electronics would yield the necessary gains in signal‐to‐noise ratio (SNR) for submillimeter visual cortex functional MRI (fMRI). Method: We optimized the shape of the housing to a population‐averaged atlas. The shape was comfortable without cushions and resulted in the maximally proximal placement of the coil elements. By using small wire loops with the least number of solder joints, we were able to maximize the Q factor of the individual elements. Finally, by planning the placement of the coils using the brain atlas, we were able to target the arrangement of the coil elements to the extent of the visual cortex. Results: The combined optimizations led to as much as two‐fold SNR gain compared with a whole‐head 32‐channel coil. This gain was reflected in temporal SNR as well and enabled fMRI mapping at 0.75 mm resolutions using a conventional GRAPPA‐accelerated gradient echo echo planar imaging. Conclusion: Integrated optimization of shape, electronics, and element placement can lead to large gains in SNR and empower submillimeter fMRI at 3T. Magn Reson Med 76:321–328, 2016. © 2015 Wiley Periodicals, Inc. PMID:26218835

  15. Detection optimization using linear systems analysis of a coded aperture laser sensor system

    SciTech Connect

    Gentry, S.M.

    1994-09-01

    Minimum detectable irradiance levels for a diffraction-grating-based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth's surface caused pseudo-imaging effects on the sensor's detector arrays that set the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction grating technique, a classical Young's double-slit aperture was investigated as a possible optimized solution, but was not shown to produce a system with a better clutter-noise-limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While no concept was found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for the analysis of a wide range of optoelectronic systems in which the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.
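
    Why Barker coding improves accuracy is easy to see numerically: the autocorrelation of a Barker sequence has a tall central peak with sidelobes of magnitude one, so a Barker-modulated aperture localizes the diffraction signature far more sharply than a plain double slit. A small illustration (not the report's actual aperture analysis):

```python
import numpy as np

# Barker-13 sequence: the longest known Barker code.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")
print(acf)   # central peak of 13; every sidelobe has magnitude <= 1
sidelobes = acf[np.arange(len(acf)) != len(acf) // 2]
print("peak-to-sidelobe ratio:", acf.max() / np.abs(sidelobes).max())
```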

  16. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    NASA Astrophysics Data System (ADS)

    Gather, Malte C.; Yun, Seok Hyun

    2014-12-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (-7 dB) and support strong optical amplification (gnet = 22 cm⁻¹; 96 dB cm⁻¹). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  17. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers.

    PubMed

    Gather, Malte C; Yun, Seok Hyun

    2014-12-08

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (-7 dB) and support strong optical amplification (gnet=22 cm(-1); 96 dB cm(-1)). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  18. Real Time Optimizing Code for Stabilization and Control of Plasma Reactors

    1995-09-25

    LOOP4 is a flexible real-time control code that acquires signals (input variables) from an array of sensors, computes from them the actual state of the reactor system, compares the actual state to the desired state (a goal), and commands changes to the reactor controls (output, or manipulated, variables) in order to minimize the difference between the actual and desired states. That difference is quantified as a distance metric in the space defined by the sensor measurements. The desired state of the reactor is specified in terms of target values of sensor readings obtained previously, during development and optimization of the process by an engineer using conventional techniques.
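
    A minimal sketch of such a loop, assuming a hypothetical two-sensor plant model (the record does not disclose LOOP4's actual state estimation or actuation logic):

```python
import numpy as np

def read_sensors(u):
    """Hypothetical plant: sensor readings as a function of the two
    manipulated variables (a real loop would read hardware here)."""
    return np.array([2.0 * u[0] + u[1], u[1] ** 2])

target = np.array([1.0, 0.25])   # sensor targets recorded from a good process
u = np.array([0.1, 0.1])         # manipulated variables (e.g., power, gas flow)

def loss(u):
    """Squared distance metric in the space of sensor measurements."""
    e = read_sensors(u) - target
    return float(e @ e)

for _ in range(300):
    l0 = loss(u)
    # finite-difference descent direction over the manipulated variables
    grad = np.array([(loss(u + dv) - l0) / 1e-4
                     for dv in (np.array([1e-4, 0.0]), np.array([0.0, 1e-4]))])
    u = u - 0.05 * grad           # command changes that shrink the distance

print("final sensor-space distance:", loss(u) ** 0.5)
```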

  19. An optimal decision population code that accounts for correlated variability unambiguously predicts a subject's choice.

    PubMed

    Carnevale, Federico; de Lafuente, Victor; Romo, Ranulfo; Parga, Néstor

    2013-12-18

    Decisions emerge from the concerted activity of neuronal populations distributed across brain circuits. However, the analytical tools best suited to decode decision signals from neuronal populations remain unknown. Here we show that knowledge of correlated variability between pairs of cortical neurons allows perfect decoding of decisions from population firing rates. We recorded pairs of neurons from secondary somatosensory (S2) and premotor (PM) cortices while monkeys reported the presence or absence of a tactile stimulus. We found that while populations of S2 and sensory-like PM neurons are only partially correlated with behavior, those PM neurons active during a delay period preceding the motor report predict unequivocally the animal's decision report. Thus, a population rate code that optimally reveals a subject's perceptual decisions can be implemented just by knowing the correlations of PM neurons representing decision variables.
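
    The role of the correlations can be illustrated with a toy linear readout on synthetic rates (not the recorded data): whitening the mean-rate difference by the noise covariance gives the Fisher-optimal weights, and discarding the off-diagonal covariance typically costs accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 20, 400
mu_yes = rng.normal(10.0, 2.0, n_neurons)          # mean rates, "stimulus present"
mu_no = mu_yes - 0.6                               # small rate difference
L = np.tril(rng.normal(0, 0.4, (n_neurons, n_neurons)), -1) + np.eye(n_neurons)
cov = L @ L.T                                      # correlated trial-to-trial noise

r_yes = rng.multivariate_normal(mu_yes, cov, n_trials)
r_no = rng.multivariate_normal(mu_no, cov, n_trials)

# Fisher-optimal linear readout: w = C^{-1} (mu_yes - mu_no)
w = np.linalg.solve(cov, mu_yes - mu_no)
thresh = w @ (mu_yes + mu_no) / 2
acc = np.mean(np.r_[r_yes @ w > thresh, r_no @ w <= thresh])
print(f"accuracy, correlation-aware weights: {acc:.3f}")

# Ignoring the correlations (diagonal covariance only) typically does worse.
w_d = (mu_yes - mu_no) / np.diag(cov)
thresh_d = w_d @ (mu_yes + mu_no) / 2
acc_d = np.mean(np.r_[r_yes @ w_d > thresh_d, r_no @ w_d <= thresh_d])
print(f"accuracy, correlations ignored:      {acc_d:.3f}")
```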

  20. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and the magnetic field of the Earth's outer core span a vast range of length scales, and resolving these flows in geodynamo simulations requires high-performance computing; in simulations based on the spherical harmonic transform (SHT), a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters at the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization on CPUs. To optimize further, we investigate three different GPU algorithms for the SHT. The first preemptively computes the Legendre polynomials on the CPU before executing the SHT on the GPU within the time-integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU. In the third approach, we partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU; the partitioned work is then computed simultaneously within the time-integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computation on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform, and we will compare and contrast the different algorithms in the context of GPUs.
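
    The first strategy rests on a simple observation: once the Legendre polynomials are precomputed, each transform in the time loop is a dense matrix product, exactly the operation GPUs execute best. A NumPy stand-in for the m = 0 case (SciPy assumed available; Calypso's actual normalization and data layout are not reproduced):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import lpmv

L_max, m = 63, 0
x, w = leggauss(L_max + 1)                 # Gauss-Legendre nodes and weights
ls = np.arange(L_max + 1)

# Precompute normalized P_l(x_j) once, outside the time-integration loop.
P = np.array([lpmv(m, l, x) for l in ls])            # shape (L+1, n_nodes)
norm = np.sqrt((2 * ls + 1) / 2.0)[:, None]          # m = 0 normalization
P_fwd = norm * P * w                                 # quadrature weights folded in

f = np.exp(-x ** 2)                                  # sample field on the nodes
coeffs = P_fwd @ f                                   # forward transform: one GEMM
f_back = (norm * P).T @ coeffs                       # backward transform: one GEMM
print("round-trip error:", np.max(np.abs(f - f_back)))
```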

  1. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: Equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb

    SciTech Connect

    Piron, R.; Blenski, T.

    2011-02-15

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included.

  2. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb.

    PubMed

    Piron, R; Blenski, T

    2011-02-01

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included. PMID:21405914

  3. Analytical computation of the derivative of PSF for the optimization of phase mask in wavefront coding system.

    PubMed

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-09-01

    Wavefront coding systems can realize defocus invariance of the PSF/OTF with a phase mask inserted in the pupil plane. Ideally, the derivative of the PSF/OTF with respect to defocus error should be as close to zero as possible over the extended depth of field/focus. In this paper, we propose an analytical expression for computing the derivative of the PSF. With this expression, a merit function based on the PSF derivative can be used to optimize a wavefront coding system with any type of phase mask and aberrations. Computations of the PSF derivative using the proposed expression and using the FFT are compared and discussed. We also demonstrate the optimization of a generic polynomial phase mask in a wavefront coding system as an example. PMID:27607710
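
    The structure of such a merit function can be sketched with finite differences standing in for the paper's analytical derivative: build the PSF from a pupil carrying a polynomial (here cubic) phase mask plus a defocus term, then average |dPSF/dψ| over the target focus range. All parameter values below are illustrative.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X ** 2 + Y ** 2 <= 1.0).astype(float)   # circular aperture
alpha = 30.0                                     # cubic phase-mask strength

def psf(psi):
    """Incoherent PSF for defocus parameter psi (radians at the pupil edge)."""
    phase = alpha * (X ** 3 + Y ** 3) + psi * (X ** 2 + Y ** 2)
    field = np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase)))
    h = np.abs(field) ** 2
    return h / h.sum()

def merit(psi_range, dpsi=0.05):
    """Mean |dPSF/dpsi| over the target depth of focus (smaller is better)."""
    return np.mean([np.abs(psf(p + dpsi) - psf(p - dpsi)).sum() / (2 * dpsi)
                    for p in psi_range])

print("merit with cubic mask:", merit(np.linspace(-5, 5, 5)))
alpha = 0.0
print("merit without mask  :", merit(np.linspace(-5, 5, 5)))
```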

  4. Symmetry-based coding method and synthesis topology optimization design of ultra-wideband polarization conversion metasurfaces

    NASA Astrophysics Data System (ADS)

    Sui, Sai; Ma, Hua; Wang, Jiafu; Feng, Mingde; Pang, Yongqiang; Xia, Song; Xu, Zhuo; Qu, Shaobo

    2016-07-01

    In this letter, we propose a synthesis topology optimization method for designing ultra-wideband polarization conversion metasurfaces for linearly polarized waves. The general design principle of polarization conversion metasurfaces is derived theoretically. Symmetry-based coding, which offers a shorter coding length and better optimization efficiency, is then proposed. As an example, a topological metasurface with ultra-wideband polarization conversion is demonstrated. The results of both simulations and experiments show that the metasurface converts linearly polarized waves into cross-polarized waves over 8.0-30.0 GHz, validating the synthesis design method. The proposed method combines the merits of topology optimization and symmetry-based coding, providing an efficient tool for the design of high-performance polarization conversion metasurfaces.
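
    The idea behind symmetry-based coding can be shown in a few lines (mirror symmetry is assumed here purely for illustration; the paper's actual symmetry constraint for polarization conversion may differ): only one quarter of the unit-cell pixel pattern is encoded, shrinking the search space from 2^(N·N) to 2^((N/2)·(N/2)) candidate topologies.

```python
import numpy as np

N = 8
rng = np.random.default_rng(3)
quarter = rng.integers(0, 2, (N // 2, N // 2))   # the actual "chromosome"

half = np.hstack([quarter, quarter[:, ::-1]])    # mirror left-right
cell = np.vstack([half, half[::-1, :]])          # mirror top-bottom

assert np.array_equal(cell, cell[:, ::-1])       # symmetric about both axes
assert np.array_equal(cell, cell[::-1, :])
print(f"coding length: {quarter.size} bits instead of {cell.size}")
```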

  5. Multiplex iterative plasmid engineering for combinatorial optimization of metabolic pathways and diversification of protein coding sequences.

    PubMed

    Li, Yifan; Gu, Qun; Lin, Zhenquan; Wang, Zhiwen; Chen, Tao; Zhao, Xueming

    2013-11-15

    Engineering complex biological systems typically requires combinatorial optimization to achieve the desired functionality. Here, we present Multiplex Iterative Plasmid Engineering (MIPE), a highly efficient and customizable method for combinatorial diversification of plasmid sequences. MIPE exploits ssDNA-mediated λ Red recombineering for the introduction of mutations, allowing it to target several sites simultaneously and generate libraries of up to 10⁷ sequences in one reaction. We also describe "restriction digestion mediated co-selection (RD CoS)", which enables MIPE to produce enhanced recombineering efficiencies with greatly simplified co-selection procedures. To demonstrate this approach, we applied MIPE to fine-tune gene expression levels in the 5-gene riboflavin biosynthetic pathway and successfully isolated a clone with 2.67-fold improved production in less than a week. We further demonstrated the ability of MIPE to achieve highly multiplexed diversification of a protein coding sequence by simultaneously targeting 23 codons scattered along its 750 bp length. We anticipate this method will benefit the optimization of diverse biological systems in synthetic biology and metabolic engineering.

  6. An Optimal Pull-Push Scheduling Algorithm Based on Network Coding for Mesh Peer-to-Peer Live Streaming

    NASA Astrophysics Data System (ADS)

    Cui, Laizhong; Jiang, Yong; Wu, Jianping; Xia, Shutao

    Most large-scale Peer-to-Peer (P2P) live streaming systems are constructed as a mesh structure, which provides robustness in the dynamic P2P environment. The pull scheduling algorithm widely used in this mesh structure degrades the performance of the entire system. Recently, network coding was introduced in mesh P2P streaming systems to improve performance, making a push strategy feasible. One of the best-known scheduling algorithms based on network coding is R2, which uses a random push strategy. Although R2 has achieved some success, push scheduling still lacks a theoretical model and an optimal solution. In this paper, we propose a novel optimal pull-push scheduling algorithm based on network coding, which consists of two stages: an initial pull stage and a push stage. The main contributions of this paper are: 1) we put forward a theoretical analysis model that considers the scarcity and timeliness of segments; 2) we formulate the push scheduling problem as a global optimization problem and decompose it into local optimization problems on individual peers; 3) we introduce rules that transform each local optimization problem into a classical min-cost optimization problem so that it can be solved; 4) we combine the pull strategy with the push strategy and realize our scheduling algorithm systematically. Simulation results demonstrate that the decode delay, decode ratio, and redundant fraction of a P2P streaming system using our algorithm are significantly improved, without losing throughput or increasing overhead.
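
    A sketch of the final reduction step (hypothetical cost terms; the paper's exact model is not reproduced): once a peer's local problem is written as a min-cost matching of coded segments to neighbors, it can be handed to a standard assignment solver. SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n_segments, n_neighbors = 6, 6
scarcity = rng.random(n_segments)     # rarer segments: cheaper to push
deadline = rng.random(n_segments)     # closer playback deadline: cheaper to push
bandwidth = rng.random(n_neighbors)   # faster neighbors: cheaper to receive

# Illustrative cost combining scarcity and timeliness, in the spirit of the
# paper's model (the actual cost function is not given in this record).
cost = (1 - scarcity)[:, None] + deadline[:, None] + (1 - bandwidth)[None, :]

rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm: min total cost
for s, n in zip(rows, cols):
    print(f"push segment {s} to neighbor {n}")
```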

  7. Neural network river forecasting through baseflow separation and binary-coded swarm optimization

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie

    2015-10-01

    The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are preferred ways to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit the baseflow and excess-flow components produced by a digital filter separately, and reconstruct the total flow by adding the two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed to identify the filter parameters and model structure, while Extreme Learning Machines, instead of ANNs, are used to drastically reduce the large computational times required by the experiments. The results show no evidence that MMs outperform GMs in predicting total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the filter-parameter values that maximize overall accuracy do not reflect the geological characteristics of the river basins. Indeed, setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
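
    For reference, a standard one-parameter recursive digital filter of the kind used for such baseflow separation (Lyne-Hollick type; the paper's exact filter and its binary-coded parameter search are not reproduced). The parameter `a` is what the swarm optimization tunes in the modular models:

```python
import numpy as np

def baseflow_filter(q, a=0.925, passes=1):
    """Separate baseflow from a streamflow series `q` by recursively
    filtering out the quickflow component."""
    q = np.asarray(q, dtype=float)
    for _ in range(passes):
        quick = np.zeros_like(q)
        for t in range(1, len(q)):
            quick[t] = a * quick[t - 1] + 0.5 * (1 + a) * (q[t] - q[t - 1])
            quick[t] = min(max(quick[t], 0.0), q[t])   # keep components physical
        q = q - quick                                   # baseflow after this pass
    return q

flow = 10 + 5 * np.sin(np.linspace(0, 12, 200)) \
       + np.random.default_rng(0).gamma(1, 2, 200)      # synthetic hydrograph
base = baseflow_filter(flow)
print("baseflow index:", base.sum() / flow.sum())
```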

  8. Optimized hybrid transform coding for very low bit rates: videotelephony communication on personal computer

    NASA Astrophysics Data System (ADS)

    Eude, Gerard; Schmitt, Jean-Claude

    1994-05-01

    This paper describes a "very low bitrate visual telephony" demonstrator designed to be used on the Public Switched Telephone Network for many multimedia purposes. The development was done by CNET in coordination with the European COST211ter project, with the aim of demonstrating videotelephony at very low bit rates. The main concern was to optimize a video coding algorithm based on the existing CCITT H.261 standard and directly derived from the COST211ter simulation model. The different signals needed for a videotelephony communication (video, speech, data and control) are modulated and transmitted at a bit rate between 9.6 kbit/s and 28.8 kbit/s. A description of the demonstrator is given, including the video algorithm and system multiplex specifications. The reasons for the choice of video format and algorithm are also discussed. A friendly software application has been developed to run videotelephony within a Macintosh computer environment. This program uses the QuickTime routines to record and play the videophone pictures to or from the hard disk; single pictures or long sequences can be grabbed to disk. Data can also be transmitted by opening, through the audio/video multiplex, a data channel of some kbit/s within the video channel, allowing minimal groupware applications.

  9. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    SciTech Connect

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems, errors are higher (29%), but optimized performance can again be predicted within 8% in some cases. These results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  10. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; they are studied here, for PDD and proton range, against the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for simulating the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters, using the whole computational model of the treatment nozzle, and then defined the optimal parameters by referring to the calculation results. The physical model, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
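
    For readers unfamiliar with the range metric quoted above: R90 is the depth, distal to the Bragg peak, at which the PDD falls to 90% of its maximum. A small interpolation sketch on a synthetic curve (not the simulation data from this record):

```python
import numpy as np

def r90(depth_mm, dose):
    """Depth at which the dose falls to 90% of maximum, distal to the peak."""
    dose = dose / dose.max()
    i_peak = int(np.argmax(dose))
    distal = slice(i_peak, len(dose))
    # linear interpolation of the (monotone) distal falloff at the 90% level
    return float(np.interp(0.9, dose[distal][::-1], depth_mm[distal][::-1]))

z = np.linspace(0, 300, 3000)                    # depth in mm
pdd = np.exp(-((z - 269) / 6.0) ** 2) \
      + 0.3 * (z < 269) * (1 - z / 400)          # toy Bragg curve with plateau
print(f"R90 = {r90(z, pdd):.2f} mm")
```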

  11. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    SciTech Connect

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
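
    The optimization problem has the generic structure min_x ||Ax − d||² + λ·TV(x) subject to x ≥ 0. The toy sketch below uses plain projected gradient descent on a smoothed TV penalty, not TFOCS, purely to show that structure (the dose-influence matrix and dimensions are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_beamlets = 120, 60
A = rng.random((n_vox, n_beamlets))              # dose-influence matrix
x_true = np.zeros(n_beamlets)
x_true[20:35] = 1.0                              # piecewise-constant fluence
d = A @ x_true                                   # prescribed dose

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed total-variation penalty sum_i |x_{i+1}-x_i|."""
    dx = np.diff(x)
    s = dx / np.sqrt(dx ** 2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

x = np.zeros(n_beamlets)
lam = 2.0
step = 0.4 / np.linalg.norm(A, 2) ** 2           # safe step for the quadratic term
for _ in range(3000):
    grad = 2 * A.T @ (A @ x - d) + lam * tv_grad(x)
    x = np.maximum(x - step * grad, 0.0)         # project onto x >= 0
print("dose residual:", np.linalg.norm(A @ x - d))
print("active beamlets:", int(np.count_nonzero(x > 0.05)))
```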

  12. Code Optimization, Frozen Glassy Phase and Improved Decoding Algorithms for Low-Density Parity-Check Codes

    NASA Astrophysics Data System (ADS)

    Huang, Hai-Ping

    2015-01-01

    The statistical physics properties of low-density parity-check codes for the binary symmetric channel are investigated as a spin glass problem with multi-spin interactions and quenched random fields by the cavity method. By evaluating the entropy function at the Nishimori temperature, we find that irregular constructions with heterogeneous degree distribution of check (bit) nodes have higher decoding thresholds compared to regular counterparts with homogeneous degree distribution. We also show that the instability of the mean-field calculation takes place only after the entropy crisis, suggesting the presence of a frozen glassy phase at low temperatures. When no prior knowledge of channel noise is assumed (searching for the ground state), we find that a reinforced strategy on normal belief propagation will boost the decoding threshold to a higher value than the normal belief propagation. This value is close to the dynamical transition where all local search heuristics fail to identify the true message (codeword or the ferromagnetic state). After the dynamical transition, the number of metastable states with larger energy density (than the ferromagnetic state) becomes exponentially numerous. When the noise level of the transmission channel approaches the static transition point, there starts to exist exponentially numerous codewords sharing the identical ferromagnetic energy.
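
    The "normal belief propagation" baseline referred to above can be written compactly in min-sum form for a toy parity-check matrix over the binary symmetric channel (the paper's code ensembles, Nishimori-temperature analysis, and reinforcement are not reproduced here):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],    # toy (2,3)-regular parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def decode(y, p, iters=30):
    """Min-sum BP over a BSC with flip probability p; returns hard decisions."""
    llr_ch = (1 - 2 * y.astype(float)) * np.log((1 - p) / p)   # channel LLRs
    m_vc = np.tile(llr_ch, (H.shape[0], 1)) * H                # variable->check
    for _ in range(iters):
        # check->variable: product of signs and min magnitude over the others
        m_cv = np.zeros_like(m_vc)
        for c, v in zip(*np.nonzero(H)):
            others = [u for u in np.nonzero(H[c])[0] if u != v]
            sign = np.prod(np.sign(m_vc[c, others]))
            m_cv[c, v] = sign * np.min(np.abs(m_vc[c, others]))
        # variable->check: channel LLR plus all other incoming check messages
        total = llr_ch + m_cv.sum(axis=0)
        for c, v in zip(*np.nonzero(H)):
            m_vc[c, v] = total[v] - m_cv[c, v]
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():           # all parity checks satisfied
            return hard
    return hard

sent = np.zeros(6, dtype=int)                    # all-zero codeword
received = sent.copy(); received[2] ^= 1         # one bit flipped by the channel
print("decoded:", decode(received, p=0.1))       # recovers the all-zero word
```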

  13. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    PubMed

    Harper, Nicol S; Scott, Brian H; Semple, Malcolm N; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)-the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  14. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)–the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron’s maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  15. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  16. Experiences in the Performance Analysis and Optimization of a Deterministic Radiation Transport Code on the Cray SV1

    SciTech Connect

    Peter Cebull

    2004-05-01

    The Attila radiation transport code, which solves the Boltzmann neutron transport equation on three-dimensional unstructured tetrahedral meshes, was ported to a Cray SV1. Cray's performance analysis tools pointed to two subroutines that together accounted for 80%-90% of the total CPU time. Source code modifications were performed to enable vectorization of the most significant loops, to correct unfavorable strides through memory, and to replace a conjugate gradient solver subroutine with a call to the Cray Scientific Library. These optimizations resulted in a speedup of 7.79 for the INEEL's largest ATR model. Parallel scalability of the OpenMP version of the code is also discussed, and timing results are given for other non-vector platforms.

  17. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes.

    PubMed

    Khajeh, Masoud; Safigholi, Habib

    2016-03-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose to the tumor depth. First, a Monte Carlo (MC) optimization was performed on the tungsten target-buffer layer thicknesses versus energy, such that minimum X-ray attenuation occurred. A second optimization was performed on the selection of the anode shape, based on the Monte Carlo in-water TG-43U1 anisotropy function; it was carried out to bring the dose anisotropy function as close to unity as possible at any angle from 0° to 170°. Three anode shapes were considered: cylindrical, spherical, and conical. Moreover, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated with a Computational Fluid Dynamics (CFD) code. The CFD characterization criteria were the minimum temperature on the anode shape, the cooling water, and the pressure loss from inlet to outlet. The optimal anode was conical in shape, with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  18. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  1. GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders

    SciTech Connect

    Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed

    2013-06-01

    We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.
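
    The structure of such a tracking loop can be shown in miniature (a linear one-turn map and a round-Gaussian kick stand in for the code's arbitrary-order maps and full Bassetti-Erskine field; all parameter values and sign conventions below are illustrative):

```python
import numpy as np

def one_turn_matrix(tune, beta=1.0):
    """Linear symplectic one-turn map for one transverse plane."""
    mu = 2 * np.pi * tune
    return np.array([[np.cos(mu), beta * np.sin(mu)],
                     [-np.sin(mu) / beta, np.cos(mu)]])

def beam_beam_kick(x, y, xi=0.01, sigma=1.0):
    """Round-Gaussian beam-beam kick (simplified Bassetti-Erskine limit)."""
    r2 = x ** 2 + y ** 2 + 1e-30
    f = -4 * np.pi * xi * (1 - np.exp(-r2 / (2 * sigma ** 2))) / r2
    return f * x, f * y

Mx, My = one_turn_matrix(0.31), one_turn_matrix(0.32)
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (2, 1000))      # (x, x') for 1000 macro-particles
Y = rng.normal(0, 1, (2, 1000))
for _ in range(1000):                # long-term loop: transport, then kick
    X, Y = Mx @ X, My @ Y
    dx, dy = beam_beam_kick(X[0], Y[0])
    X[1] += dx                       # momentum kicks at the interaction point
    Y[1] += dy
print("rms x after 1000 turns:", X[0].std())
```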

  2. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to offer significant advantages for high-precision particle therapy, especially in media containing inhomogeneities. However, the effect of the computational parameters chosen in the GATE, PHITS and FLUKA MC codes, previously examined for a uniform scanning proton beam, needs to be evaluated for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and minimization of computational time. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters across proton therapy applications cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We

  3. [Non elective cesarean section: use of a color code to optimize management of obstetric emergencies].

    PubMed

    Rudigoz, René-Charles; Huissoud, Cyril; Delecour, Lisa; Thevenet, Simone; Dupont, Corinne

    2014-06-01

    The medical team of the Croix Rousse teaching hospital maternity unit has developed, over the last ten years, a set of procedures designed to respond to various emergency situations necessitating Caesarean section. Using the Lucas classification, we have defined as precisely as possible the degree of urgency of Caesarean sections. We have established specific protocols for the implementation of urgent and very urgent Caesarean sections and have chosen a simple means to convey the degree of urgency to all team members, namely a color code system (red, orange and green). We have set time goals from decision to delivery: 15 minutes for the red code and 30 minutes for the orange code. The results seem very positive: the frequency of urgent and very urgent Caesareans has fallen over time, from 6.1% to 1.6% in 2013. The average time from decision to delivery is 11 minutes for code-red Caesareans and 21 minutes for code-orange Caesareans; these time goals are now achieved in 95% of cases. Organizational and anesthetic difficulties are the main causes of delay. The indications for red- and orange-code Caesareans are appropriate more than two times out of three. Perinatal outcomes are generally favorable, code-red Caesareans being life-saving in 15% of cases. No increase in maternal complications has been observed. In sum, each obstetric department should have its own protocols for handling urgent and very urgent Caesarean sections, and continuous monitoring of their implementation, relevance and results should be conducted. Management of extreme urgency must be integrated into the management of patients with identified risks (scarred uterus and twin pregnancies, for example), and also in structures without medical facilities (birthing centers). Obstetric teams must keep in mind that implementation of these protocols in no way dispenses with close monitoring of labour. PMID:26983190

  4. DENSE MEDIUM CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell; Chris J. Barbee; Peter J. Bethell; Chris J. Wood

    2005-06-30

    Dense medium cyclones (DMCs) are known to be efficient, high-tonnage devices suitable for upgrading particles in the 50 to 0.5 mm size range. This versatile separator, which uses centrifugal forces to enhance the separation of fine particles that cannot be upgraded in static dense medium separators, can be found in most modern coal plants and in a variety of mineral plants treating iron ore, dolomite, diamonds, potash and lead-zinc ores. Due to the high tonnage, a small increase in DMC efficiency can have a large impact on plant profitability. Unfortunately, the knowledge base required to properly design and operate DMCs has been seriously eroded during the past several decades. In an attempt to correct this problem, a set of engineering tools have been developed to allow producers to improve the efficiency of their DMC circuits. These tools include (1) low-cost density tracers that can be used by plant operators to rapidly assess DMC performance, (2) mathematical process models that can be used to predict the influence of changes in operating and design variables on DMC performance, and (3) an expert advisor system that provides plant operators with a user-friendly interface for evaluating, optimizing and trouble-shooting DMC circuits. The field data required to develop these tools was collected by conducting detailed sampling and evaluation programs at several industrial plant sites. These data were used to demonstrate the technical, economic and environmental benefits that can be realized through the application of these engineering tools.

  5. Program user's manual for optimizing the design of a liquid or gaseous propellant rocket engine with the automated combustor design code AUTOCOM

    NASA Technical Reports Server (NTRS)

    Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.

    1973-01-01

    This computer program manual describes, in two parts, the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN IV language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.

  6. A study of the optimization method used in the NAVY/NASA gas turbine engine computer code

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Pines, S.

    1977-01-01

    Sources of numerical noise affecting the convergence properties of the Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.

  7. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
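
    A GF(2) sketch of this column-population scheme (the patent covers general GF(q), and the greedy candidate order here is an assumption): a candidate column survives the filter only if it is nonzero and is not the XOR of up to d−2 already-chosen columns, which preserves linear independence of every set of d−1 columns.

```python
import itertools
import numpy as np

def violates(cols, cand, d):
    """True if adding `cand` would make some d-1 columns dependent, i.e.
    cand is 0 or equals the XOR of up to d-2 already-chosen columns."""
    if cand == 0:
        return True
    for r in range(1, d - 1):
        for subset in itertools.combinations(cols, r):
            if np.bitwise_xor.reduce(subset + (cand,)) == 0:
                return True
    return False

def populate(n_cols, n_rows, d):
    cols = []
    for cand in range(1, 2 ** n_rows):
        if len(cols) == n_cols:
            break
        if not violates(tuple(cols), cand, d):
            cols.append(cand)          # each int encodes one GF(2) column
    return cols

# d = 4 (SEC-DED): columns must be nonzero, pairwise distinct, and no column
# may equal the XOR of two others; the greedy filter recovers the odd-weight
# columns of an extended-Hamming-style [8,4,4] check matrix.
print([format(c, "04b") for c in populate(n_cols=8, n_rows=4, d=4)])
```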

  8. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  9. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  10. Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Askri, Boubaker

    2015-10-01

    Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low energy bremsstrahlung photons with beryllium material. A benchmark test showed that a good agreement was achieved when comparing the emitted neutron flux spectra predicted by Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two stage Monte Carlo simulation. In the first stage, the distributions of the seven phase space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage, events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10¹⁰ neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10⁹ neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.
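
    The two-stage strategy is generic: tally a phase-space coordinate at a boundary in stage one, then re-emit particles sampled from the tallied distribution in stage two, avoiding a full re-simulation of stage one for every design variation. A toy NumPy sketch of the resampling step (synthetic energies, one coordinate instead of seven, no Geant4):

```python
# Illustrative two-stage source: stage 1 records a boundary-crossing
# distribution; stage 2 draws new source particles from that histogram.
import numpy as np
rng = np.random.default_rng(1)

# Stage 1: pretend bremsstrahlung energies tallied at the target boundary.
stage1_energies = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # MeV, toy

counts, edges = np.histogram(stage1_energies, bins=200)
cdf = np.cumsum(counts) / counts.sum()

def sample_stage2(n):
    """Draw stage-2 source energies from the tallied stage-1 histogram."""
    u = rng.random(n)
    idx = np.searchsorted(cdf, u)            # pick a bin by inverse CDF
    lo, hi = edges[idx], edges[idx + 1]
    return lo + (hi - lo) * rng.random(n)    # uniform within each bin

print(sample_stage2(5))
```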

  11. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  12. Regional bit allocation and rate distortion optimization for multiview depth video coding with view synthesis distortion model.

    PubMed

    Zhang, Yun; Kwong, Sam; Xu, Long; Hu, Sudeng; Jiang, Gangyi; Kuo, C-C Jay

    2013-09-01

    In this paper, we propose a view synthesis distortion model (VSDM) that establishes the relationship between depth distortion and view synthesis distortion for regions with different characteristics: color texture area corresponding depth (CTAD) regions and color smooth area corresponding depth (CSAD) regions. With this VSDM, we propose regional bit allocation (RBA) and rate distortion optimization (RDO) algorithms for multiview depth video coding (MDVC) that allocate more bits on CTAD for rendering quality and fewer bits on CSAD for compression efficiency. Experimental results show that the proposed VSDM-based RBA and RDO improve the coding efficiency significantly for the test sequences. The proposed overall MDVC algorithm, which integrates VSDM-based RBA and RDO, achieves 9.99% and 14.51% bit rate reductions on average at high and low bit rates, respectively, and improves virtual view image quality by 0.22 and 0.24 dB on average at high and low bit rates, respectively, when compared with the original joint multiview video coding model. RD performance comparisons using five different metrics also validate the effectiveness of the proposed overall algorithm. In addition, the proposed algorithms can be applied to both INTRA and INTER frames.

  13. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm.
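
    For intuition, the layer-to-MCS assignment can be stated in a few lines. The sketch below replaces the ILP solver with exhaustive enumeration over a toy instance (all rates, coverage fractions and budgets are invented); a real formulation would hand the same objective and constraint to an ILP solver.

```python
# Toy MCS-assignment problem: give each SVC layer one MCS; faster MCSs use
# fewer time slots but fewer users can decode them.
import itertools

rates = {0: 1.0, 1: 2.0, 2: 4.0}        # Mb/s per slot for each MCS
coverage = {0: 1.0, 1: 0.7, 2: 0.4}     # fraction of users decoding each MCS
layer_bits = [2.0, 3.0]                 # Mb per layer (base, enhancement)
slot_budget = 2.2

best = None
for mcs in itertools.product(rates, repeat=len(layer_bits)):
    slots = sum(layer_bits[i] / rates[m] for i, m in enumerate(mcs))
    if slots > slot_budget:
        continue                        # violates the time-resource constraint
    # An enhancement layer helps only users who decode all lower layers,
    # so effective coverage is the minimum over the layers so far.
    cov, utility = 1.0, 0.0
    for i, m in enumerate(mcs):
        cov = min(cov, coverage[m])
        utility += layer_bits[i] * cov  # bits actually received on average
    if best is None or utility > best[0]:
        best = (utility, mcs, slots)

print("best utility %.2f with MCS %s using %.2f slots" % best)
```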

  14. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862

  15. Performance of an Optimized Eta Model Code on the Cray T3E and a Network of PCs

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Rancic, Miodrag; Geiger, Jim

    2000-01-01

    In the year 2001, NASA will launch the satellite TRIANA that will be the first Earth observing mission to provide a continuous, full disk view of the sunlit Earth. As a part of the HPCC Program at NASA GSFC, we have started a project whose objectives are to develop and implement a 3D cloud data assimilation system, by combining TRIANA measurements with model simulation, and to produce accurate statistics of global cloud coverage as an important element of the Earth's climate. For simulation of the atmosphere within this project we are using the NCEP/NOAA operational Eta model. In order to compare TRIANA and the Eta model data on approximately the same grid without significant downscaling, the Eta model will be integrated at a resolution of about 15 km. The integration domain (from -70 to +70 deg in latitude and 150 deg in longitude) will cover most of the sunlit Earth disc and will continuously rotate around the globe following TRIANA. The cloud data assimilation is supposed to run and produce 3D clouds on a near real-time basis. Such a numerical setup and integration design is very ambitious and computationally demanding. Thus, though the Eta model code has been very carefully developed and its computational efficiency has been systematically polished during the years of operational implementation at NCEP, the current MPI version may still have problems with memory and efficiency for the TRIANA simulations. Within this work, we optimize a parallel version of the Eta model code on a Cray T3E and a network of PCs (theHIVE) in order to improve its overall efficiency. Our optimization procedure consists of introducing dynamically allocated arrays to reduce the size of static memory, and optimizing on a single processor by splitting loops to limit the number of streams. All the presented results are derived using an integration domain centered at the equator, with a size of 60 x 60 deg, and with horizontal resolutions of 1/2 and 1/3 deg, respectively.

  16. Optimized neural coding? Control mechanisms in large cortical networks implemented by connectivity changes

    PubMed Central

    Cross, Katy A.; Iacoboni, Marco

    2011-01-01

    Using functional magnetic resonance imaging, we show that a distributed fronto-parietal visuomotor integration network is recruited to overcome automatic responses to both biological and non-biological cues. Activity levels in these areas are similar for both cue types. The functional connectivity of this network, however, reveals differential coupling with thalamus and precuneus (biological cues) and extrastriate cortex (non-biological cues). This suggests that a set of cortical areas equally activated in two tasks may accomplish task goals differently depending on their network interactions. This supports models of brain organization that emphasize efficient coding through changing patterns of integration between regions of specialized function. PMID:21976418

  17. Nonbinary Quantum Convolutional Codes Derived from Negacyclic Codes

    NASA Astrophysics Data System (ADS)

    Chen, Jianzhang; Li, Jianping; Yang, Fan; Huang, Yuanyuan

    2015-01-01

    In this paper, some families of nonbinary quantum convolutional codes are constructed from negacyclic codes. These codes differ from the quantum convolutional codes previously reported in the literature. Moreover, we construct a family of optimal quantum convolutional codes.

  18. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, either for a single-stage mismatched filter or for a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGAs, which often becomes a design challenge for system-on-chip (SoC) requirements. This requirement for multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the logic used in the FPGA for FIR filters by reducing the number of distinct weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs and produce different clusterings of the weights; occasionally a shorter filter with fewer multipliers may even provide a better PSR.
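
    The clustering step is straightforward to sketch. Below, a least-squares mismatched filter for the Barker-13 code (standing in for the paper's LP-designed filters) has its taps quantized to 8 levels by a plain NumPy 1-D k-means, and the peak-to-sidelobe ratio is compared before and after; the filter length and level count are arbitrary choices.

```python
import numpy as np

barker13 = np.array([1,1,1,1,1,-1,-1,1,1,-1,1,-1,1], float)

def psr_db(code, taps):
    """Peak-to-max-sidelobe ratio of the compressed pulse, in dB."""
    y = np.abs(np.convolve(code, taps))
    k = np.argmax(y)
    return 20 * np.log10(y[k] / np.delete(y, k).max())

# Least-squares mismatched filter: conv(code, h) ~ unit impulse at center.
N = 39                                   # filter length
M = len(barker13) + N - 1
A = np.zeros((M, N))
for j in range(N):
    A[j:j + len(barker13), j] = barker13 # convolution matrix, column by column
d = np.zeros(M); d[M // 2] = 1.0
h, *_ = np.linalg.lstsq(A, d, rcond=None)

def kmeans_1d(x, k, iters=100):
    """Quantize x to k centroid values (plain 1-D k-means)."""
    c = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(lab == j):
                c[j] = x[lab == j].mean()
    return c[lab]

print("matched   PSR %5.1f dB" % psr_db(barker13, barker13[::-1]))
print("LS filter PSR %5.1f dB" % psr_db(barker13, h))
print("8-level   PSR %5.1f dB" % psr_db(barker13, kmeans_1d(h, 8)))
```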

  19. DAMOCLES - A Monte Carlo Code for the Design Optimization of the OMNIS Detectors

    NASA Astrophysics Data System (ADS)

    Zach, Juergen J.; Stj. Murphy, Alexander; Marriott, Darin; Boyd, Richard N.

    2000-10-01

    In support of the proposal for the Observatory for Multiflavor NeutrIno Oscillations (OMNIS), a Monte Carlo code has been developed to realistically simulate the operation of the planned detectors. OMNIS is based on the detection of neutrons emitted from nuclei excited by neutrinos from a supernova burst interacting with Pb or Fe nuclei. This is accomplished using Gd-loaded liquid scintillator. Results for the optimum configuration of the modules with respect to cost-efficiency are presented. The results show that the amount of data to be processed by a software trigger can be reduced to the <10 kHz region and that a neutron, once produced in the detector, can be detected and identified as originating from a neutrino spallation event with an efficiency above 30%. The effects of radioactive impurities in the detector materials used are also examined.

  20. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    SciTech Connect

    Yeaw, C.T.

    1995-12-31

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.

  1. Combined optimal quantization and lossless coding of digital holograms of three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2006-10-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated, owing to the development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realized, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantization. We adapt the optimal signal quantization technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry. We complete the compression procedure by applying lossless techniques to the quantized holographic pixels.
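
    The quantization step can be sketched compactly: Lloyd's algorithm (1-D k-means) applied separately to the real and imaginary parts, which is one simple reading of a histogram-driven optimal quantizer for complex pixels. The data below are synthetic Gaussians, not captured holograms, and the level count is arbitrary.

```python
# Histogram-driven quantization of complex-valued pixels: Lloyd's algorithm
# run independently on the real and imaginary components.
import numpy as np
rng = np.random.default_rng(0)

def lloyd_quantize(x, levels, iters=100):
    c = np.quantile(x, np.linspace(0, 1, levels))   # spread initial codebook
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(levels):
            if np.any(lab == j):
                c[j] = x[lab == j].mean()           # centroid update
    return c[lab]

pixels = rng.normal(size=10_000) + 1j * rng.normal(size=10_000)
q = lloyd_quantize(pixels.real, 8) + 1j * lloyd_quantize(pixels.imag, 8)
snr = 10 * np.log10(np.mean(np.abs(pixels)**2) / np.mean(np.abs(pixels - q)**2))
print("8x8-level quantization SNR: %.1f dB" % snr)
```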

  2. BMI optimization by using parallel UNDX real-coded genetic algorithm with Beowulf cluster

    NASA Astrophysics Data System (ADS)

    Handa, Masaya; Kawanishi, Michihiro; Kanki, Hiroshi

    2007-12-01

    This paper deals with a global optimization algorithm for Bilinear Matrix Inequalities (BMIs) based on the Unimodal Normal Distribution Crossover (UNDX) genetic algorithm (GA). First, by analyzing the structure of BMIs, the existence of typical difficult structures is confirmed. Then, in order to improve the performance of the algorithm, based on the results of the problem-structure analysis and on characteristic properties of BMIs, we propose an algorithm that uses a primary search direction with a relaxed Linear Matrix Inequality (LMI) convex estimation. Moreover, we propose two types of evaluation methods for GA individuals, based on LMI calculations, that further exploit the characteristic properties of BMIs. In addition, in order to reduce computation time, we propose a parallelization of the real-coded GA using the Master-Worker paradigm on a Beowulf cluster.

  3. ROCOPT: A user friendly interactive code to optimize rocket structural components

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1989-01-01

    ROCOPT is a user-friendly, graphically-interfaced, microcomputer-based computer program (IBM compatible) that optimizes rocket components by minimizing the structural weight. The rocket components considered are ring stiffened truncated cones and cylinders. The applied loading is static, and can consist of any combination of internal or external pressure, axial force, bending moment, and torque. Stress margins are calculated by means of simple closed form strength of material type equations. Stability margins are determined by approximate, orthotropic-shell, closed-form equations. A modified form of Powell's method, in conjunction with a modified form of the external penalty method, is used to determine the minimum weight of the structure subject to stress and stability margin constraints, as well as user input constraints on the structural dimensions. The graphical interface guides the user through the required data prompts, explains program options and graphically displays results for easy interpretation.
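
    The optimization strategy named in the abstract — Powell's method inside an external penalty loop — is easy to demonstrate on a toy problem. The sketch below uses SciPy's Powell implementation with a quadratic exterior penalty; the "weight" and "margin" functions are invented stand-ins, not ROCOPT's structural equations.

```python
# Powell's method with an external (exterior quadratic) penalty:
# minimize weight(x) subject to margin(x) >= 0.
import numpy as np
from scipy.optimize import minimize

def weight(x):
    return x[0]**2 + x[1]**2               # stand-in for structural weight

def margin(x):
    return x[0] + x[1] - 1.0               # stress margin, must stay >= 0

def penalized(x, mu):
    g = margin(x)
    return weight(x) + mu * min(g, 0.0)**2  # penalty only when violated

x = np.array([2.0, 0.0])
for mu in (10.0, 100.0, 1000.0):           # tighten the penalty progressively
    x = minimize(lambda z: penalized(z, mu), x, method="Powell").x
print("design:", x, "margin:", margin(x))  # -> near (0.5, 0.5), margin ~ 0
```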

  4. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

    Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum-99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had been 'experimentally demonstrated to be among the safest of all various types of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code Fluidity; the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  5. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error-coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high quality color rendering, and identifies remaining visual artifacts.

  6. COOH-terminal isoleucine of lysosome-associated membrane protein-1 is optimal for its efficient targeting to dense secondary lysosomes.

    PubMed

    Akasaki, Kenji; Suenobu, Michihisa; Mukaida, Maki; Michihara, Akihiro; Wada, Ikuo

    2010-12-01

    Lysosome-associated membrane protein-1 (LAMP-1) consists of a highly glycosylated luminal domain, a single-transmembrane domain and a short cytoplasmic tail that possesses a lysosome-targeting signal (GYQTI(382)) at the COOH terminus. It is hypothesized that the COOH-terminal isoleucine, I(382), could be substituted with any other bulky hydrophobic amino acid residue for LAMP-1 to exclusively localize in lysosomes. In order to test this hypothesis, we compared the subcellular distribution of four substitution mutants with phenylalanine, leucine, methionine and valine at the COOH-terminus (termed I382F, I382L, I382M and I382V, respectively) with that of wild-type (WT)-LAMP-1. Double-labelled immunofluorescence analyses showed that these substitution mutants localized to late endocytic organelles as significantly as WT-LAMP-1. However, the quantitative subcellular fractionation study revealed different distributions of WT-LAMP-1 and these four COOH-terminal mutants in late endosomes and dense secondary lysosomes. WT-LAMP-1 accumulated three to six times more in the dense lysosomal fraction than the four mutants. The level of WT-LAMP-1 in the late endosomal fraction was comparable to those of I382F, I382M and I382V. Conversely, I382L in the late endosomal fraction was approximately three times more abundant than WT-LAMP-1. These findings define the presence of the isoleucine residue at the COOH-terminus of LAMP-1 as critical in governing its efficient delivery to secondary lysosomes and its ratio of lysosomes to late endosomes.

  7. Dense with Sense

    NASA Astrophysics Data System (ADS)

    Aletras, Anthony H.; Ingkanisorn, W. Patricia; Mancini, Christine; Arai, Andrew E.

    2005-09-01

    Displacement encoding with stimulated echoes (DENSE) with a low encoding strength, phase-cycled meta-DENSE readout, and a twofold SENSE acceleration (R = 2) is described. This combination reduces total breath-hold times for increased patient comfort during cardiac regional myocardial contractility studies. Images from phantoms, normal volunteers, and a patient are provided to demonstrate the SENSE-DENSE combination of methods. The overall breath-hold time is halved while preserving strain map quality.

  8. Laser-induced fusion in ultra-dense deuterium D(-1): Optimizing MeV particle emission by carrier material selection

    NASA Astrophysics Data System (ADS)

    Holmlid, Leif

    2013-02-01

    Power generation by laser-induced nuclear fusion in ultra-dense deuterium D(-1) requires that the carrier material interact correctly with D(-1) prior to the laser pulse and also during the laser pulse. In previous studies, the interaction between the superfluid D(-1) layer and various carrier materials prior to the laser pulse has been investigated. It was shown that organic polymer materials do not give a condensed D(-1) layer. Metal surfaces carry thicker D(-1) layers useful for fusion. Here, the interaction between the carrier and the nuclear fusion process is investigated by observing the MeV particle emission (e.g. 14 MeV protons) using twelve different carrier materials and two different methods of detection. Several factors bearing on the performance of the carrier materials have been analyzed: the hardness and the melting point of the material, and the chemical properties of the surface layer. The best performance is found for the high-melting metals Ti and Ta, but Cu also performs well as a carrier despite its low melting point. The unexpectedly meager performance of Ni and Ir may be due to their catalytic activity towards hydrogen, which may cause atomic deuterium to associate into molecules at the low D2 pressure used.

  9. Atoms in dense plasmas

    SciTech Connect

    More, R.M.

    1986-01-01

    Recent experiments with high-power pulsed lasers have strongly encouraged the development of improved theoretical understanding of highly charged ions in a dense plasma environment. This work examines the theory of dense plasmas with emphasis on general rules which govern matter at extreme high temperature and density. 106 refs., 23 figs.

  10. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques permit capturing a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions, and exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis; for this purpose, an optimization problem that minimizes a joint l2-l1 norm is solved to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach that recovers a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that minimizes the l2-norm, penalized by the l1-norm to force the solution to be sparse, and penalized by the nuclear norm to force the solution to be low rank. Theoretical analysis, along with a set of simulations over different data sets, shows that simultaneously exploiting the low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of the peak signal-to-noise ratio (PSNR).
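
    A minimal sketch of the combined objective follows, assuming a proximal-gradient style solver that applies the l1 prox (soft-thresholding) and the nuclear-norm prox (singular-value thresholding) in turn — a common heuristic, not necessarily the paper's algorithm. All sizes, step sizes and penalty weights are hand-picked for illustration.

```python
# Recover a synthetic sparse, rank-1 matrix from random projections by
# minimizing  ||Ax - b||^2 + lam*||X||_1 + mu*||X||_*  (heuristic prox steps).
import numpy as np
rng = np.random.default_rng(0)

def soft(v, t):                          # l1 prox: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def svt(M, t):                           # nuclear-norm prox: SV thresholding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

n, m, p = 32, 32, 600                    # image is n x m, p measurements
X_true = np.outer(soft(rng.normal(size=n), 1.0), soft(rng.normal(size=m), 1.0))
A = rng.normal(size=(p, n * m)) / np.sqrt(p)
b = A @ X_true.ravel()

X = np.zeros((n, m))
step, lam, mu = 0.2, 1e-3, 1e-2
for _ in range(400):
    grad = (A.T @ (A @ X.ravel() - b)).reshape(n, m)
    X = svt(soft(X - step * grad, step * lam), step * mu)  # both proxes in turn

print("relative error %.3f" % (np.linalg.norm(X - X_true) / np.linalg.norm(X_true)))
```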

  11. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  12. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  13. User's guide for the BNW-III optimization code for modular dry/wet-cooled power plants

    SciTech Connect

    Braun, D.J.; Faletti, D.W.

    1984-09-01

    This user's guide describes BNW-III, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of the Dry Cooling Enhancement Program sponsored by the US Department of Energy (DOE). The BNW-III code models a modular dry/wet cooling system for a nuclear or fossil fuel power plant. The purpose of this guide is to give the code user a brief description of what the BNW-III code is and how to use it. It describes the cooling system being modeled and the various models used. A detailed description of code input and code output is also included. The BNW-III code was developed to analyze a specific cooling system layout. However, there is a large degree of freedom in the type of cooling modules that can be selected and in the performance of those modules. The costs of the modules are input to the code, giving the user a great deal of flexibility.

  14. Kinetic Simulations of Dense Plasma Focus Breakdown

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Higginson, D. P.; Jiang, S.; Link, A.; Povilus, A.; Sears, J.; Bennett, N.; Rose, D. V.; Welch, D. R.

    2015-11-01

    A dense plasma focus (DPF) device is a type of plasma gun that drives current through a set of coaxial electrodes to assemble gas inside the device and then implode that gas on axis to form a Z-pinch. This implosion drives hydrodynamic and kinetic instabilities that generate strong electric fields, which produce a short intense pulse of x-rays, high-energy (>100 keV) electrons and ions, and (in deuterium gas) neutrons. A strong factor in pinch performance is the initial breakdown and ionization of the gas along the insulator surface separating the two electrodes. The smoothness and isotropy of this ionized sheath are imprinted on the current sheath that travels along the electrodes, making it an important portion of the DPF to both understand and optimize. Here we use kinetic simulations in the particle-in-cell code LSP to model the breakdown. Simulations are initiated with neutral gas, and the breakdown is modeled self-consistently as driven by a charged capacitor system. We also investigate novel geometries for the insulator and electrodes to attempt to control the electric field profile. The initial ionization fraction of the gas is explored computationally to gauge possible advantages of pre-ionization, which could be created experimentally via lasers or a glow discharge. Prepared by LLNL under Contract DE-AC52-07NA27344.

  15. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a bare water phantom. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with a broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter lists showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical models, particle transport mechanics and geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.

  16. Optimization of Grit-Blasting Process Parameters for Production of Dense Coatings on Open Pores Metallic Foam Substrates Using Statistical Methods

    NASA Astrophysics Data System (ADS)

    Salavati, S.; Coyle, T. W.; Mostaghimi, J.

    2015-10-01

    Open-pore metallic foam core sandwich panels prepared by thermal spraying a coating onto the foam structure can be used as high-efficiency heat transfer devices due to their high surface-area-to-volume ratio. The structural, mechanical, and physical properties of the thermally sprayed skins play a significant role in the performance of the related devices. These properties are mainly controlled by the porosity content, oxide content, adhesion strength, and stiffness of the deposited coating. In this study, the effects of grit-blasting process parameters on the characteristics of the temporary surface created on the metallic foam substrate, and on the twin-wire arc-sprayed alloy 625 coating subsequently deposited on the foam, were investigated through response surface methodology. Characterization of the prepared surface and sprayed coating was conducted by scanning electron microscopy, roughness measurements, and adhesion testing. Using a statistical design of experiments (response surface method), a model was developed to predict the effect of grit-blasting parameters on the surface roughness of the prepared foam and on the porosity content of the sprayed coating. The coating porosity and adhesion strength were found to be determined by the substrate surface roughness, which could be controlled by the grit-blasting parameters. Optimization of the grit-blasting parameters was conducted using the fitted model to minimize the porosity content of the coating while maintaining high adhesion strength.

  17. Codes with Monotonic Codeword Lengths.

    ERIC Educational Resources Information Center

    Abrahams, Julia

    1994-01-01

    Discusses the minimum average codeword length coding under the constraint that the codewords are monotonically nondecreasing in length. Bounds on the average length of an optimal monotonic code are derived, and sufficient conditions are given such that algorithms for optimal alphabetic codes can be used to find the optimal monotonic code. (six…
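
    For readers unfamiliar with the setting: once a monotonically nondecreasing length sequence satisfying the Kraft inequality is in hand, a prefix code with exactly those lengths can be written down canonically. A small sketch (our own helper, not from the paper):

```python
# Build a canonical prefix code from nondecreasing codeword lengths.
from fractions import Fraction

def canonical_code(lengths):            # lengths assumed sorted nondecreasing
    assert sum(Fraction(1, 2**L) for L in lengths) <= 1, "Kraft violated"
    code, value, prev = [], 0, lengths[0]
    for L in lengths:
        value <<= (L - prev)            # extend the running value to length L
        code.append(format(value, "0%db" % L))
        value += 1
        prev = L
    return code

print(canonical_code([1, 2, 3, 3]))     # -> ['0', '10', '110', '111']
```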

  18. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    NASA Astrophysics Data System (ADS)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.

  19. FAST GYROSYNCHROTRON CODES

    SciTech Connect

    Fleishman, Gregory D.; Kuznetsov, Alexey A.

    2010-10-01

    Radiation produced by charged particles gyrating in a magnetic field is highly significant in the astrophysics context. Persistently increasing resolution of astrophysical observations calls for corresponding three-dimensional modeling of the radiation. However, available exact equations are prohibitively slow in computing a comprehensive table of high-resolution models required for many practical applications. To remedy this situation, we develop approximate gyrosynchrotron (GS) codes capable of quickly calculating the GS emission (in the non-quantum regime) from both isotropic and anisotropic electron distributions in non-relativistic, mildly relativistic, and ultrarelativistic energy domains, applicable throughout a broad range of source parameters including dense or tenuous plasmas and weak or strong magnetic fields. The computation time is reduced by several orders of magnitude compared with the exact GS algorithm. The new algorithm's performance can gradually be adjusted to the user's needs depending on whether precision or computation speed is to be optimized for a given model. The codes are made available for users as a supplement to this paper.

  20. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, Richard L.

    1993-01-01

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  1. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, R.L.

    1993-10-12

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  2. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    SciTech Connect

    Dellin, T.A.; Fish, M.J.; Yang, C.L.

    1981-08-01

    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.

  3. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operations play a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in a large number of table memory accesses and hence high table power consumption. To reduce the memory accesses, and with them the power consumption, of current methods, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index search technique that reduces the memory accesses required for table look-up and thus the table power consumption. Specifically, our scheme exploits the internal relationship among the number of zeros in code_prefix, the value of code_suffix and code_length to reduce the searching and matching operations for code_word, thereby saving table look-up power. Experimental results show that the proposed index-search table look-up algorithm lowers memory access consumption by about 60% compared with a sequential-search table look-up scheme, saving considerable power for CAVLD in H.264/AVC.
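
    The idea of replacing a sequential codeword scan with an index computed from the zero prefix can be illustrated with a toy table (the codewords below are invented, not the actual CAVLC coeff_token tables, and the stream is assumed well formed):

```python
# Index-search VLC decoding: count leading zeros, then jump directly to the
# small subtable for that prefix instead of scanning every codeword.
TABLE = {            # leading-zero count -> {suffix bits: symbol}
    0: {"": "A"},                      # codeword: 1
    1: {"0": "B", "1": "C"},           # codewords: 010, 011
    2: {"0": "D", "1": "E"},           # codewords: 0010, 0011
}

def decode(bits):
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i + zeros] == "0":          # the index: zero-run length
            zeros += 1
        sub = TABLE[zeros]                     # direct jump, no linear scan
        width = len(next(iter(sub)))           # suffix width in this subtable
        start = i + zeros + 1                  # skip zeros and the marker '1'
        out.append(sub[bits[start:start + width]])
        i = start + width
    return out

print(decode("1" "010" "0011" "1"))            # -> ['A', 'B', 'E', 'A']
```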

  4. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.

    1995-06-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  5. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
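
    As a concrete companion to the objects in this abstract, the sketch below builds the vertex and plaquette stabilizers of a small toric code in the binary symplectic representation and checks that every pair commutes; the lattice indexing is our own convention, not taken from the paper.

```python
# Toric code on an L x L torus: 2*L*L edge qubits, vertex (X) and plaquette
# (Z) stabilizers; two operators commute iff their symplectic product is 0.
import numpy as np

L = 3
nq = 2 * L * L
def h(x, y): return (x % L) * L + (y % L)            # horizontal edge index
def v(x, y): return L * L + (x % L) * L + (y % L)    # vertical edge index

stabs = []                               # rows: [X-part | Z-part]
for x in range(L):
    for y in range(L):
        s = np.zeros(2 * nq, dtype=int)  # vertex operator: X on 4 edges
        for e in (h(x, y), h(x, y - 1), v(x, y), v(x - 1, y)):
            s[e] = 1
        stabs.append(s)
        p = np.zeros(2 * nq, dtype=int)  # plaquette operator: Z on 4 edges
        for e in (h(x, y), h(x + 1, y), v(x, y), v(x, y + 1)):
            p[nq + e] = 1
        stabs.append(p)

S = np.array(stabs)
X, Z = S[:, :nq], S[:, nq:]
comm = (X @ Z.T + Z @ X.T) % 2           # symplectic products, all must be 0
print("all stabilizers commute:", not comm.any())
```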

  6. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and by adopting time-frequency coded cooperative transmission and the D-PSO algorithm.
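
    The detection idea can be sketched generically: a binary PSO searching over joint user bit vectors to minimize the residual between the received signal and a re-synthesized one. Everything below (spreading codes, noise level, PSO constants) is invented for illustration and is far simpler than D-PSO.

```python
# Binary PSO for joint multiuser bit detection in a toy CDMA-like model.
import numpy as np
rng = np.random.default_rng(3)

K, N, P = 4, 16, 30                      # users, spreading length, particles
codes = rng.choice([-1.0, 1.0], size=(K, N))
bits_true = rng.choice([-1.0, 1.0], size=K)
r = codes.T @ bits_true + 0.3 * rng.normal(size=N)   # received chip vector

def cost(b):                             # residual energy for bit vector b
    return np.sum((r - codes.T @ b) ** 2)

pos = rng.choice([-1.0, 1.0], size=(P, K))
vel = np.zeros((P, K))
pbest = pos.copy()
pcost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pcost)].copy()

for _ in range(60):
    r1, r2 = rng.random((P, K)), rng.random((P, K))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))    # sigmoid: velocity -> P(bit = +1)
    pos = np.where(rng.random((P, K)) < prob, 1.0, -1.0)
    c = np.array([cost(p) for p in pos])
    improved = c < pcost
    pbest[improved], pcost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pcost)].copy()

print("detected bits match truth:", np.array_equal(gbest, bits_true))
```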

  7. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and by adopting time-frequency coded cooperative transmission and the D-PSO algorithm. PMID:26343660

  8. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and by adopting time-frequency coded cooperative transmission and the D-PSO algorithm. PMID:26343660

  9. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol's modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039

  10. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  11. Dense Axion Stars

    NASA Astrophysics Data System (ADS)

    Mohapatra, Abhishek; Braaten, Eric; Zhang, Hong

    2016-03-01

    If the dark matter consists of axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound Bose-Einstein condensates of axions. In the previously known axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. If the axion mass energy is mc^2 = 10^-4 eV, these dilute axion stars have a maximum mass of about 10^-14 M⊙. We point out that there are also dense axion stars in which gravity is balanced by the mean-field pressure of the axion condensate. We study axion stars using the leading term in a systematically improvable approximation to the effective potential of the nonrelativistic effective field theory for axions. Using the Thomas-Fermi approximation in which the kinetic pressure is neglected, we find a sequence of new branches of axion stars in which gravity is balanced by the mean-field interaction energy of the axion condensate. If mc^2 = 10^-4 eV, the first branch of these dense axion stars has mass ranging from about 10^-11 M⊙ to about M⊙.

  12. Warm dense crystallography

    NASA Astrophysics Data System (ADS)

    Valenza, Ryan A.; Seidler, Gerald T.

    2016-03-01

    The intense femtosecond-scale pulses from x-ray free electron lasers (XFELs) are able to create and interrogate interesting states of matter characterized by long-lived nonequilibrium semicore or core electron occupancies or by the heating of dense phases via the relaxation cascade initiated by the photoelectric effect. We address here the latter case of "warm dense matter" (WDM) and investigate the observable consequences of x-ray heating of the electronic degrees of freedom in crystalline systems. We report temperature-dependent density functional theory calculations for the x-ray diffraction from crystalline LiF, graphite, diamond, and Be. We find testable, strong signatures of condensed-phase effects that emphasize the importance of wide-angle scattering to study nonequilibrium states. These results also suggest that the reorganization of the valence electron density at eV-scale temperatures presents a confounding factor to achieving atomic resolution in macromolecular serial femtosecond crystallography (SFX) studies at XFELs, as performed under the "diffract before destroy" paradigm.

  13. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  14. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  15. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy (DOE) Consortium for Advanced Simulations of Light Water (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain-decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input over the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed; MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
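
    As a schematic illustration of the one-assembly-per-process SPMD layout described above (this is not CTF code; the input-file naming and the stand-in local solve are hypothetical), the decomposition maps naturally onto MPI ranks:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# One MPI process per fuel assembly, each reading its own domain input file
# (hypothetical naming convention for this sketch).
my_input = f"assembly_{rank:03d}.inp"

def solve_local(input_file):
    # Placeholder for the per-assembly sub-channel solve.
    return float(rank)

local = solve_local(my_input)
# Stand-in for the globally coupled step (CTF delegates the global pressure
# matrix to PETSc); here we simply combine scalars across all ranks.
total = comm.allreduce(local, op=MPI.SUM)
if rank == 0:
    print("global coupling result:", total)
```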

  16. Theory and Simulation of Warm Dense Matter Targets

    SciTech Connect

    Barnard, J J; Armijo, J; More, R M; Friedman, A; Kaganovich, I; Logan, B G; Marinak, M M; Penn, G E; Sefkow, A B; Santhanam, P; Wurtele, J S

    2006-07-13

    We present simulations and analysis of the heating of warm dense matter foils by ion beams with ion energy less than one MeV per nucleon to target temperatures of order one eV. Simulations were carried out using the multi-physics radiation hydrodynamics code HYDRA and comparisons are made with analysis and the code DPC. We simulate possible targets for a proposed experiment at LBNL (the so-called Neutralized Drift Compression Experiment, NDCXII) for studies of warm dense matter. We compare the dynamics of ideally heated targets, under several assumed equation of states, exploring dynamics in the two-phase (fluid-vapor) regime.

  17. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code.

    PubMed

    Taccogna, F; Minelli, P; Cavenago, M; Veltri, P; Ippolito, N

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role for the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around the single aperture including part of the source and part of the acceleration (up to the extraction grid (EG) middle) regions has been developed for the new aperture design prepared for negative ion optimization 1 source. Results have shown that the dimension of the flat and chamfered parts and the slope of the latter in front of the source region maximize the product of production rate and extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the plane yz has been reported. PMID:26932027

  18. BUMPERII - DESIGN ANALYSIS CODE FOR OPTIMIZING SPACECRAFT SHIELDING AND WALL CONFIGURATION FOR ORBITAL DEBRIS AND METEOROID IMPACTS

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1994-01-01

    BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element of each threat. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability
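
    The underlying Poisson model is simple to state: if N is the expected number of penetrating impacts accumulated over all surface elements and threat cases, the probability of no penetration is exp(-N). A minimal sketch of that relation follows (the element fluxes and areas are hypothetical, and the flux is assumed to be pre-thresholded at each wall's critical diameter):

```python
import numpy as np

def pnp(flux, area, exposure_time):
    """Probability of no penetration under the Poisson model: PNP = exp(-N),
    with N the expected number of penetrating impacts.

    flux : per-element flux of penetrating particles (impacts / m^2 / yr),
           already restricted to impacts above the wall's critical diameter
    area : element areas (m^2)
    exposure_time : mission exposure (yr)
    """
    n_expected = np.sum(np.asarray(flux) * np.asarray(area)) * exposure_time
    return np.exp(-n_expected)

# Hypothetical three-element surface over a 10-year exposure
print(pnp(flux=[1e-6, 5e-7, 2e-6], area=[12.0, 8.0, 20.0], exposure_time=10.0))
```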

  19. Geometrical Optics of Dense Aerosols

    SciTech Connect

    Hay, Michael J.; Valeo, Ernest J.; Fisch, Nathaniel J.

    2013-04-24

    Assembling a free-standing, sharp-edged slab of homogeneous material that is much denser than gas, but much more rarefied than a solid, is an outstanding technological challenge. The solution may lie in focusing a dense aerosol to assume this geometry. However, whereas the geometrical optics of dilute aerosols is a well-developed field, the dense aerosol limit is mostly unexplored. Yet controlling the geometrical optics of dense aerosols is necessary in preparing such a material slab. Focusing dense aerosols is shown here to be possible, but the finite particle density reduces the effective Stokes number of the flow, a critical result for controlled focusing.
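
    For context, the Stokes number that controls how faithfully particles follow the focusing flow is, under the standard Stokes-drag assumption (a textbook definition, not a formula quoted from this report),

    \[ \mathrm{St} = \frac{\tau_p U}{L}, \qquad \tau_p = \frac{\rho_p d_p^2}{18\,\mu}, \]

    where \tau_p is the particle response time, U and L are a characteristic flow speed and length scale, \rho_p and d_p are the particle density and diameter, and \mu is the gas viscosity. The report's key point is that finite particle loading reduces the effective Stokes number below this dilute-limit value.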

  20. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals.

    PubMed

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as an optimal number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.
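
    The trade-off described here can be reproduced in a toy model (our assumption for illustration, not the paper's equations): detection reliability improves roughly like the square root of the number of ion channels N, while energy cost grows linearly in N, so information per unit energy peaks at a finite N:

```python
import numpy as np
from scipy.stats import norm

def binary_mi(p):
    """Mutual information (bits) of a binary symmetric channel with error p."""
    h = lambda q: -q*np.log2(q) - (1-q)*np.log2(1-q) if 0 < q < 1 else 0.0
    return 1.0 - h(p)

# Toy assumptions: channel noise averages out like 1/sqrt(N), so the detection
# error falls as a Gaussian tail in sqrt(N); energy cost grows as c0 + c1*N.
N = np.arange(1, 2001)
p_err = norm.sf(np.sqrt(N) * 0.08)
efficiency = np.array([binary_mi(p) for p in p_err]) / (50.0 + 1.0 * N)
print("optimal channel count in this toy model:", N[np.argmax(efficiency)])
```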

  1. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2014-05-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78, 032515 (2008)].

  2. Dense Hypervelocity Plasma Jets

    NASA Astrophysics Data System (ADS)

    Case, Andrew; Witherspoon, F. Douglas; Messer, Sarah; Bomgardner, Richard; Phillips, Michael; van Doren, David; Elton, Raymond; Uzun-Kaymak, Ilker

    2007-11-01

    We are developing high velocity dense plasma jets for fusion and HEDP applications. Traditional coaxial plasma accelerators suffer from the blow-by instability, which limits the mass accelerated to high velocity. In the current design, blow-by is delayed by a combination of electrode shaping and the use of a tailored plasma armature created by injection of a high density plasma at a few eV generated by arrays of capillary discharges or spark gaps. Experimental data will be presented for a complete 32-injector gun system built for driving rotation in the Maryland MCX experiment, including data on penetration of the plasma jet through a magnetic field. We present spectroscopic measurements of plasma velocity, temperature, and density, as well as total momentum measured using a ballistic pendulum. Measurements are in agreement with each other and with time-of-flight data from photodiodes and a multichannel PMT. Plasma density is above 10^15 cm^-3, and velocities range up to about 100 km/s. Preliminary results from a quadrature heterodyne HeNe interferometer are consistent with these results.

  3. Ariel's Densely Pitted Surface

    NASA Technical Reports Server (NTRS)

    1986-01-01

    This mosaic of the four highest-resolution images of Ariel represents the most detailed Voyager 2 picture of this satellite of Uranus. The images were taken through the clear filter of Voyager's narrow-angle camera on Jan. 24, 1986, at a distance of about 130,000 kilometers (80,000 miles). Ariel is about 1,200 km (750 mi) in diameter; the resolution here is 2.4 km (1.5 mi). Much of Ariel's surface is densely pitted with craters 5 to 10 km (3 to 6 mi) across. These craters are close to the threshold of detection in this picture. Numerous valleys and fault scarps crisscross the highly pitted terrain. Voyager scientists believe the valleys have formed over down-dropped fault blocks (graben); apparently, extensive faulting has occurred as a result of expansion and stretching of Ariel's crust. The largest fault valleys, near the terminator at right, as well as a smooth region near the center of this image, have been partly filled with deposits that are younger and less heavily cratered than the pitted terrain. Narrow, somewhat sinuous scarps and valleys have been formed, in turn, in these young deposits. It is not yet clear whether these sinuous features have been formed by faulting or by the flow of fluids.

    JPL manages the Voyager project for NASA's Office of Space Science.

  4. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.
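
    At the core of the wrapper scheme is the ELM itself, which trains in a single least-squares solve and therefore makes scoring many candidate input subsets cheap. A minimal sketch follows (the data, dimensions, and example subset mask are hypothetical placeholders, not the study's setup):

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer, least-squares readout."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random nonlinear features
        self.beta = np.linalg.pinv(H) @ y     # output weights in closed form
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Scoring one candidate input subset (one "particle"), as a BFIPS-style wrapper would:
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 2]
mask = np.array([1, 0, 1, 0, 0, 0, 0, 0], dtype=bool)   # hypothetical binary position
model = ELM().fit(X[:150, mask], y[:150])
err = np.mean((model.predict(X[150:, mask]) - y[150:]) ** 2)
print("validation MSE for this subset:", err)
```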

  5. Dense energetic nitraminofurazanes.

    PubMed

    Fischer, Dennis; Klapötke, Thomas M; Reymann, Marius; Stierstorfer, Jörg

    2014-05-19

    3,3'-Diamino-4,4'-bifurazane (1), 3,3'-diaminoazo-4,4'-furazane (2), and 3,3'-diaminoazoxy-4,4'-furazane (3) were nitrated in 100 % HNO3 to give the corresponding 3,3'-dinitramino-4,4'-bifurazane (4), 3,3'-dinitramino-4,4'-azofurazane (5) and 3,3'-dinitramino-4,4'-azoxyfurazane (6), respectively. The neutral compounds show very imposing explosive performance but possess lower thermal stability and higher sensitivity than hexogen (RDX). More than 40 nitrogen-rich compounds and metal salts were prepared. Most compounds were characterized by low-temperature X-ray diffraction, all of them by infrared and Raman spectroscopy, multinuclear NMR spectroscopy, elemental analysis, and by differential scanning calorimetry (DSC). Calculated energetic performances using the EXPLO5 code based on calculated (CBS-4M) heats of formation and X-ray densities support the high energetic performances of the nitraminofurazanes as energetic materials. The sensitivities towards impact, friction, and electrostatic discharge were also explored. Additionally, the general toxicity of the anions against Vibrio fischeri, representative of an aquatic microorganism, was determined.

  6. Analysis of dense particulate flow dynamics using a Euler-Lagrange approach

    NASA Astrophysics Data System (ADS)

    Desjardins, Olivier; Pepiot, Perrine

    2009-11-01

    Thermochemical conversion of biomass to biofuels relies heavily on dense particulate flows to enhance heat and mass transfers. While CFD tools can provide very valuable insights on reactor design and optimization, accurate simulations of these flows remain extremely challenging due to the complex coupling between the gas and solid phases. In this work, Lagrangian particle tracking has been implemented in the arbitrarily high order parallel LES/DNS code NGA [Desjardins et al., JCP, 2008]. Collisions are handled using a soft-sphere model, while a combined least squares/mollification approach is adopted to accurately transfer data between the Lagrangian particles and the Eulerian gas phase mesh, regardless of the particle diameter to mesh size ratio. The energy conservation properties of the numerical scheme are assessed and a detailed statistical analysis of the dynamics of a periodic fluidized bed with a uniform velocity inlet is conducted.
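
    For reference, a common form of the soft-sphere collision model is a linear spring-dashpot acting on the particle overlap; the sketch below shows that generic form (the specific stiffness and damping used in NGA are not given in the abstract, so the parameters here are placeholders):

```python
import numpy as np

def soft_sphere_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1e4, gamma_n=5.0):
    """Normal contact force on particle i from particle j under a linear
    spring-dashpot (soft-sphere) model; zero when the spheres do not overlap."""
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                       # unit normal pointing from j to i
    v_rel_n = np.dot(v_i - v_j, n)     # normal component of relative velocity
    return (k_n * overlap - gamma_n * v_rel_n) * n

# Two 1 mm spheres overlapping by 0.2 mm, at rest
print(soft_sphere_force(np.array([0.0, 0.0, 0.0]), np.array([0.0018, 0.0, 0.0]),
                        np.zeros(3), np.zeros(3), r_i=0.001, r_j=0.001))
```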

  7. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2012-07-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].
    New version program summary
    Program title: HFFER II
    Catalogue identifier: AECC_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 55 130
    No. of bytes in distributed program, including test data, etc.: 293 700
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: Cluster of 1-13 HP Compaq dc5750
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
    RAM: 1 GByte per node
    Classification: 2.1
    External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
    Catalogue identifier of previous version: AECC_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
    Does the new version supersede the previous version?: Yes
    Nature of problem: Quantitative modellings of features observed in the X-ray spectra of isolated magnetic neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases.
    Solution method: The

  8. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  9. Neutron Emission in Deuterium Dense Plasma Foci

    NASA Astrophysics Data System (ADS)

    Appelbe, Brian; Chittenden, Jeremy

    2013-10-01

    We present the results of a computational study of the deuterium dense plasma focus (DPF) carried out to improve understanding of the neutron production mechanism in the DPF. The device currents studied range from 70 kA to several MA. The complete evolution of the DPF is simulated in 3D from rundown through to neutron emission using a hybrid computational method. The rundown, pinching, stagnation and post-stagnation (pinch break-up) phases are simulated using the 3D MHD code Gorgon. Kinetic computational tools are used to model the formation and transport of non-thermal ion populations and neutron production during the stagnation and post-stagnation phases, resulting in the production of synthetic neutron spectra. It is observed that the break-up phase plays an important role in the formation of non-thermal ions. Large electric fields generated during pinch break-up cause ions to be accelerated from the edges of dense plasma regions. The dependence on current of the neutron yield, neutron spectra shape and isotropy is studied. The effect of magnetization of the non-thermal ions is evident as the anisotropy of the neutron spectra decreases at higher current.

  10. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block

  11. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.
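
    For orientation, the classical generalized Singleton bound (due to Rosenthal and Smarandache) states that an (n, k, δ) convolutional code has free distance

    \[ d_{\mathrm{free}} \;\le\; (n-k)\left(\left\lfloor \frac{\delta}{k} \right\rfloor + 1\right) + \delta + 1, \]

    and codes meeting this bound with equality are called MDS convolutional codes; the constructions above attain the corresponding quantum version of the bound.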

  12. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J. |; van de Geijn, R.; Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.
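
    The bookkeeping behind a block-scattered (2D block-cyclic) decomposition is compact: global block (ib, jb) is owned by process (ib mod Pr, jb mod Pc). A minimal sketch of that mapping, with an illustrative grid size:

```python
def block_owner(ib, jb, p_rows, p_cols):
    """Process coordinates owning global block (ib, jb) in a 2D block-cyclic
    ("square block scattered") distribution over a p_rows x p_cols grid."""
    return ib % p_rows, jb % p_cols

def local_block_index(ib, p):
    """Local block row (or column) index of global block ib on its owner."""
    return ib // p

# 8x8 grid of matrix blocks scattered over a 2x3 process grid
for ib in range(8):
    print([block_owner(ib, jb, 2, 3) for jb in range(8)])
```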

  13. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    SciTech Connect

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-06-15

    Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence-pattern along the beam-axis the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
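
    The matrix method referred to here composes 2x2 transfer matrices for drifts and quadrupoles in each transverse plane. The sketch below illustrates the idea in Python rather than Matlab, using the thin-lens approximation and hypothetical lengths and focal strengths rather than the study's optimized values:

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L (one transverse plane)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole: focusing for f > 0, defocusing for f < 0 (in this plane)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Doublet: drift -- quad -- drift -- quad -- drift to the iso-center.
# Matrices act right-to-left, so the first element traversed is rightmost.
M = drift(1.5) @ thin_quad(-0.8) @ drift(0.3) @ thin_quad(0.8) @ drift(0.5)
x0 = np.array([0.002, 0.001])   # initial (position [m], angle [rad])
print(M @ x0)                    # transported phase-space coordinates
```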

  14. Validation of spatiotemporally dense springtime land surface phenology with intensive and upscale in situ

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Land surface phenology (LSP) developed using temporally and spatially optimized remote sensing data, is particularly promising for use in detailed ecosystem monitoring and modeling efforts. Validating spatiotemporally dense LSP using compatible (intensively collected) in situ phenological data is t...

  15. Warm Dense Matter: An Overview

    SciTech Connect

    Kalantar, D H; Lee, R W; Molitoris, J D

    2004-04-21

    This document provides a summary of the ''LLNL Workshop on Extreme States of Materials: Warm Dense Matter to NIF'', which was held on 20-22 February 2002 at the Wente Conference Center in Livermore, CA. The warm dense matter regime, the transitional phase space region between cold material and hot plasma, is presently poorly understood. The drive to understand the nature of matter in this regime is sparking scientific activity worldwide. In addition to pure scientific interest, finite temperature dense matter occurs in the regimes of interest to the SSMP (Stockpile Stewardship Materials Program), so obtaining a better understanding of WDM is important to performing effective experiments at, e.g., NIF, a primary mission of LLNL. At this workshop we examined current experimental and theoretical work performed at, and in conjunction with, LLNL to focus future activities and define our role in this rapidly emerging research area. On the experimental front, LLNL plays a leading role in three of the five relevant areas and has the opportunity to become a major player in the other two. Discussion at the workshop indicated that the path forward for the experimental efforts at LLNL was twofold: first, we are doing reasonable baseline work at SPLs, HE, and high-energy lasers, with more effort encouraged; second, we need to plan effectively for the next evolution in large-scale facilities, both lasers (NIF) and light/beam sources (LCLS/TESLA and GSI). Theoretically, LLNL has major research advantages in areas ranging from the thermochemical approach to warm dense matter equations of state to first-principles molecular dynamics simulations. However, it was clear that there is much work to be done theoretically to understand warm dense matter. Further, there is a need for close collaboration in which verifiable experimental data provide benchmarks of both the experimental techniques and the theoretical capabilities. The conclusion of this

  16. Transonic aerodynamics of dense gases. M.S. Thesis - Virginia Polytechnic Inst. and State Univ., Apr. 1990

    NASA Technical Reports Server (NTRS)

    Morren, Sybil Huang

    1991-01-01

    Transonic flow of dense gases for two-dimensional, steady-state, flow over a NACA 0012 airfoil was predicted analytically. The computer code used to model the dense gas behavior was a modified version of Jameson's FLO52 airfoil code. The modifications to the code enabled modeling the dense gas behavior near the saturated vapor curve and critical pressure region where the fundamental derivative, Gamma, is negative. This negative Gamma region is of interest because the nonclassical gas behavior such as formation and propagation of expansion shocks, and the disintegration of inadmissible compression shocks may exist. The results indicated that dense gases with undisturbed thermodynamic states in the negative Gamma region show a significant reduction in the extent of the transonic regime as compared to that predicted by the perfect gas theory. The results support existing theories and predictions of the nonclassical, dense gas behavior from previous investigations.
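
    For reference, the fundamental derivative mentioned above is commonly written (a standard definition, not a formula quoted from this thesis) as

    \[ \Gamma = 1 + \frac{\rho}{c}\left(\frac{\partial c}{\partial \rho}\right)_{s}, \]

    where \rho is the density, c the sound speed, and the derivative is taken at constant entropy; \Gamma < 0 marks the regime in which expansion shocks become admissible and compression shocks can disintegrate.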

  17. Dense, finely grained composite materials

    DOEpatents

    Dunmead, Stephen D.; Holt, Joseph B.; Kingman, Donald D.; Munir, Zuhair A.

    1990-01-01

    Dense, finely grained composite materials comprising one or more ceramic phases and one or more metallic and/or intermetallic phases are produced by combustion synthesis. Spherical ceramic grains are homogeneously dispersed within the matrix. Methods are provided, which include the step of applying mechanical pressure during or immediately after ignition, by which the microstructures in the resulting composites can be controllably selected.

  18. An efficient fully atomistic potential model for dense fluid methane

    NASA Astrophysics Data System (ADS)

    Jiang, Chuntao; Ouyang, Jie; Zhuang, Xin; Wang, Lihua; Li, Wuming

    2016-08-01

    A fully atomistic model intended as a general-purpose model for dense fluid methane is presented. The new optimized potential for liquid simulation (OPLS) model is a rigid five-site model which consists of five fixed point charges and five Lennard-Jones centers. The parameters in the potential model are determined by a fit to the experimental data for dense fluid methane using molecular dynamics simulation. The radial distribution function and the diffusion coefficient are successfully calculated for dense fluid methane at various state points. The simulated results are in good agreement with the available experimental data in the literature. Moreover, the distribution of the mean number of hydrogen bonds and the distribution of pair energy are analyzed, as obtained from the new model and five other reference potential models. Furthermore, the space-time correlation functions for dense fluid methane are also discussed. All the numerical results demonstrate that the new OPLS model can be used effectively to investigate dense fluid methane.
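
    The site-site interaction energy in an OPLS-style rigid model takes the standard Lennard-Jones-plus-Coulomb form (the general functional form is standard; the fitted values of \varepsilon, \sigma and the site charges are those reported in the paper and are not reproduced here):

    \[ U = \sum_{i<j}\left\{ 4\varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right\}. \]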

  19. Dense, Viscous Brine Behavior in Heterogeneous Porous Medium Systems

    PubMed Central

    Wright, D. Johnson; Pedit, J.A.; Gasda, S.E.; Farthing, M.W.; Murphy, L.L.; Knight, S.R.; Brubaker, G.R.

    2010-01-01

    The behavior of dense, viscous calcium bromide brine solutions used to remediate systems contaminated with dense nonaqueous phase liquids (DNAPLs) is considered in laboratory and field porous medium systems. The density and viscosity of brine solutions are experimentally investigated and functional forms fit over a wide range of mass fractions. A density of 1.7 times, and a corresponding viscosity of 6.3 times, that of water is obtained at a calcium bromide mass fraction of 0.53. A three-dimensional laboratory cell is used to investigate the establishment, persistence, and rate of removal of a stratified dense brine layer in a controlled system. Results from a field-scale experiment performed at the Dover National Test Site are used to investigate the ability to establish and maintain a dense brine layer as a component of a DNAPL recovery strategy, and to recover the brine at sufficiently high mass fractions to support the economical reuse of the brine. The results of both laboratory and field experiments show that a dense brine layer can be established, maintained, and recovered to a significant extent. Regions of unstable density profiles are shown to develop and persist in the field-scale experiment, which we attribute to regions of low hydraulic conductivity. The saturated-unsaturated, variable-density ground-water flow simulation code SUTRA is modified to describe the system of interest, and used to compare simulations to experimental observations and to investigate certain unobserved aspects of these complex systems. The model results show that the standard model formulation is not appropriate for capturing the behavior of sharp density gradients observed during the dense brine experiments. PMID:20444520

  20. Dense, viscous brine behavior in heterogeneous porous medium systems.

    PubMed

    Wright, D Johnson; Pedit, J A; Gasda, S E; Farthing, M W; Murphy, L L; Knight, S R; Brubaker, G R; Miller, C T

    2010-06-25

    The behavior of dense, viscous calcium bromide brine solutions used to remediate systems contaminated with dense nonaqueous phase liquids (DNAPLs) is considered in laboratory and field porous medium systems. The density and viscosity of brine solutions are experimentally investigated and functional forms fit over a wide range of mass fractions. A density of 1.7 times, and a corresponding viscosity of 6.3 times, that of water is obtained at a calcium bromide mass fraction of 0.53. A three-dimensional laboratory cell is used to investigate the establishment, persistence, and rate of removal of a stratified dense brine layer in a controlled system. Results from a field-scale experiment performed at the Dover National Test Site are used to investigate the ability to establish and maintain a dense brine layer as a component of a DNAPL recovery strategy, and to recover the brine at sufficiently high mass fractions to support the economical reuse of the brine. The results of both laboratory and field experiments show that a dense brine layer can be established, maintained, and recovered to a significant extent. Regions of unstable density profiles are shown to develop and persist in the field-scale experiment, which we attribute to regions of low hydraulic conductivity. The saturated-unsaturated, variable-density groundwater flow simulation code SUTRA is modified to describe the system of interest, and used to compare simulations to experimental observations and to investigate certain unobserved aspects of these complex systems. The model results show that the standard model formulation is not appropriate for capturing the behavior of sharp density gradients observed during the dense brine experiments.

  1. Constructing Dense Graphs with Unique Hamiltonian Cycles

    ERIC Educational Resources Information Center

    Lynch, Mark A. M.

    2012-01-01

    It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…

  2. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  3. Optimization of geometry, material and economic parameters of a two-zone subcritical reactor for transmutation of nuclear waste with SERPENT Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Gulik, Volodymyr; Tkaczyk, Alan Henry

    2014-06-01

    An optimization study of a subcritical two-zone homogeneous reactor was carried out, taking into consideration geometry, material, and economic parameters. The advantage of a two-zone subcritical system over a single-zone system is demonstrated. The study investigated the optimal volume ratio for the inner and outer zones of the subcritical reactor, in terms of the neutron-physical parameters as well as fuel cost. Optimal geometrical parameters of the system are suggested for different material compositions.

  4. Probing Cold Dense Nuclear Matter

    NASA Astrophysics Data System (ADS)

    Subedi, R.; Shneor, R.; Monaghan, P.; Anderson, B. D.; Aniol, K.; Annand, J.; Arrington, J.; Benaoum, H.; Benmokhtar, F.; Boeglin, W.; Chen, J.-P.; Choi, Seonho; Cisbani, E.; Craver, B.; Frullani, S.; Garibaldi, F.; Gilad, S.; Gilman, R.; Glamazdin, O.; Hansen, J.-O.; Higinbotham, D. W.; Holmstrom, T.; Ibrahim, H.; Igarashi, R.; de Jager, C. W.; Jans, E.; Jiang, X.; Kaufman, L. J.; Kelleher, A.; Kolarkar, A.; Kumbartzki, G.; LeRose, J. J.; Lindgren, R.; Liyanage, N.; Margaziotis, D. J.; Markowitz, P.; Marrone, S.; Mazouz, M.; Meekins, D.; Michaels, R.; Moffit, B.; Perdrisat, C. F.; Piasetzky, E.; Potokar, M.; Punjabi, V.; Qiang, Y.; Reinhold, J.; Ron, G.; Rosner, G.; Saha, A.; Sawatzky, B.; Shahinyan, A.; Širca, S.; Slifer, K.; Solvignon, P.; Sulkosky, V.; Urciuoli, G. M.; Voutier, E.; Watson, J. W.; Weinstein, L. B.; Wojtsekhowski, B.; Wood, S.; Zheng, X.-C.; Zhu, L.

    2008-06-01

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.

  5. Dilatons in Dense Baryonic Matter

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Kyu; Rho, Mannque

    We discuss the role of the dilaton, which is supposed to represent a special feature of the scale symmetry of QCD, the trace anomaly, in dense baryonic matter. The idea that the scale symmetry breaking of QCD is responsible for the spontaneous breaking of chiral symmetry is presented in a spirit similar to that of the Freund-Nambu model. The incorporation of the dilaton field into the hidden local symmetric parity-doublet model is briefly sketched, together with the possible role of the dilaton in high-density baryonic matter and the emergence of the linear sigma model in the dilaton limit.

  6. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
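
    In the linear setting that this work generalizes, the decodable information is the linear Fisher information, and an information-limiting component of the noise covariance caps it no matter how many neurons are added (this is the established linear-code result; the nonlinear generalization is the abstract's contribution):

    \[ I(s) = \mathbf{f}'(s)^{\top}\Sigma^{-1}\mathbf{f}'(s), \qquad \Sigma = \Sigma_0 + \epsilon\,\mathbf{f}'\mathbf{f}'^{\top} \;\Rightarrow\; I = \frac{I_0}{1+\epsilon I_0} \le \frac{1}{\epsilon}, \]

    where \mathbf{f}'(s) is the derivative of the mean population response with respect to the stimulus and I_0 is the information in the absence of the limiting component.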

  7. Inference by replication in densely connected systems

    SciTech Connect

    Neirotti, Juan P.; Saad, David

    2007-10-15

    An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric-like (RS-like) structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.

  8. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  9. On a dense winding of the 2-dimensional torus

    NASA Astrophysics Data System (ADS)

    Kiselev, D. D.

    2016-04-01

    An important role in the solution of a class of optimal control problems is played by a certain polynomial of degree 2(n-1) of special form with integer coefficients. The linear independence of a family of k special roots of this polynomial over ℚ implies the existence of a solution of the original problem with optimal control in the form of a dense winding of a k-dimensional Clifford torus, which is traversed in finite time. In this paper, it is proved that for every integer n>3 one can take k to be equal to 2. Bibliography: 6 titles.
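
    The density statement invoked here is Kronecker's theorem: the winding

    \[ \varphi(t) = (\alpha_1 t, \ldots, \alpha_k t) \bmod 1 \]

    is dense in the k-dimensional torus if and only if \alpha_1, \ldots, \alpha_k are linearly independent over \mathbb{Q}; in particular, for k = 2 the line (t, \alpha t) \bmod 1 is dense exactly when \alpha is irrational.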

  10. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
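
    The size-segregation idea is easy to see in a toy pyramid: each successive layer halves the sampling rate, so the layers devoted to large features carry exponentially fewer coefficients. The sketch below builds a crude block-average pyramid on a square lattice (deliberately simplified: it has none of the hexagonal sampling or oriented kernels of the HOP transform):

```python
import numpy as np

def lowpass_pyramid(img, levels=4):
    """Toy square-lattice pyramid: each layer halves the resolution, so coarse
    (large-feature) layers contain far fewer samples than fine layers."""
    layers = [img]
    for _ in range(levels - 1):
        im = layers[-1]
        h, w = im.shape[0] // 2 * 2, im.shape[1] // 2 * 2
        im = im[:h, :w]
        # 2x2 block average = crude low-pass filter followed by downsampling
        layers.append(0.25 * (im[0::2, 0::2] + im[1::2, 0::2] +
                              im[0::2, 1::2] + im[1::2, 1::2]))
    return layers

pyr = lowpass_pyramid(np.random.default_rng(0).random((64, 64)))
print([layer.shape for layer in pyr])   # (64, 64), (32, 32), (16, 16), (8, 8)
```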

  11. Dynamics and evolution of dense stellar systems

    NASA Astrophysics Data System (ADS)

    Fregeau, John M.

    2004-10-01

    The research presented in this thesis comprises a theoretical study of several aspects relating to the dynamics and evolution of dense stellar systems such as globular clusters. First, I present the results of a study of mass segregation in two-component star clusters, based on a large number of numerical N-body simulations using our Monte-Carlo code. Heavy objects, which could represent stellar remnants such as neutron stars or black holes, exhibit behavior that is in quantitative agreement with simple analytical arguments. Light objects, which could represent free-floating planets or brown dwarfs, are predominantly lost from the cluster, as expected from simple analytical arguments, but may remain in the halo in larger numbers than expected. Using a recent null detection of planetary-mass microlensing events in M22, I find an upper limit of ˜25% at the 63% confidence level for the current mass fraction of M22 in the form of very low-mass objects. Turning to more realistic clusters, I present a study of the evolution of clusters containing primordial binaries, based on an enhanced version of the Monte-Carlo code that treats binary interactions via cross sections and analytical prescriptions. All models exhibit a long-lived “binary burning” phase lasting many tens of relaxation times. The structural parameters of the models during this phase match well those of most observed Galactic globular clusters. At the end of this phase, clusters that have survived tidal disruption undergo deep core collapse, followed by gravothermal oscillations. The results clearly show that the presence of even a small fraction of binaries in a cluster is sufficient to support the core against collapse significantly beyond the normal core collapse time predicted without the presence of binaries. For tidally truncated systems, collapse is delayed sufficiently that the cluster will undergo complete tidal disruption before core collapse. Moving a step beyond analytical prescriptions, I

  12. QPhiX Code Generator

    SciTech Connect

    Joo, Balint

    2014-09-16

    A simple code-generator to generate the low level code kernels used by the QPhiX Library for Lattice QCD. Generates Kernels for Wilson-Dslash, and Wilson-Clover kernels. Can be reused to write other optimized kernels for Intel Xeon Phi(tm), Intel Xeon(tm) and potentially other architectures.

  13. Uniformly dense polymeric foam body

    DOEpatents

    Whinnery, Jr., Leroy

    2003-07-15

    A method for providing a uniformly dense polymer foam body having a density between about 0.013 g/cm^3 and about 0.5 g/cm^3 is disclosed. The method utilizes a thermally expandable polymer microsphere material wherein some of the microspheres are unexpanded and some are only partially expanded. It is shown that by mixing the two types of materials in appropriate ratios to achieve the desired bulk final density, filling a mold with this mixture so as to displace all or essentially all of the internal volume of the mold, heating the mold for a predetermined interval at a temperature above about 130 °C, and then cooling the mold to a temperature below 80 °C, the molded part achieves a bulk density which varies by less than about ±6% everywhere throughout the part volume.
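
    If the two component materials are assumed to mix with additive specific volumes, the required mass ratio follows from a one-line formula; a minimal sketch in Python, with hypothetical component densities (the patent itself determines the ratios empirically):

      def unexpanded_mass_fraction(rho_target, rho_unexp, rho_part):
          """Mass fraction of unexpanded microspheres needed to hit a target
          bulk density, assuming specific volumes (1/rho) add by mass."""
          return (1 / rho_target - 1 / rho_part) / (1 / rho_unexp - 1 / rho_part)

      # Hypothetical numbers: 1.1 g/cm^3 unexpanded material, 0.03 g/cm^3
      # partially expanded spheres, 0.05 g/cm^3 desired bulk density.
      w = unexpanded_mass_fraction(0.05, 1.1, 0.03)
      print(f"{w:.1%} unexpanded by mass")   # ~41% under these assumptions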

  14. Human Action Recognition Using Improved Salient Dense Trajectories.

    PubMed

    Li, Qingwu; Cheng, Haisu; Zhou, Yan; Huo, Guanying

    2016-01-01

    Human action recognition in videos is a topic of active research in computer vision. Dense trajectory (DT) features were shown to be efficient for representing videos in state-of-the-art approaches. In this paper, we present a more effective approach to video representation using improved salient dense trajectories: first, detecting the motion salient region and extracting the dense trajectories by tracking interest points in each spatial scale separately, and then refining the dense trajectories via analysis of the motion saliency. Then, we compute several descriptors (i.e., trajectory displacement, HOG, HOF, and MBH) in the spatiotemporal volume aligned with the trajectories. Finally, in order to represent the videos better, we optimize the framework of bag-of-words according to the motion salient intensity distribution and the idea of sparse coefficient reconstruction. Our architecture is trained and evaluated on the four standard video action datasets of KTH, UCF sports, HMDB51, and UCF50, and the experimental results show that our approach performs competitively compared with state-of-the-art results. PMID:27293425
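
    The bag-of-words stage at the heart of this representation can be pictured with the minimal sketch below (Python with numpy and scikit-learn). The descriptor dimension and codebook size are arbitrary placeholders, and the paper's saliency weighting and sparse-reconstruction refinements are omitted.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      train_desc = rng.normal(size=(5000, 96))   # stand-in HOG/HOF/MBH vectors
      codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_desc)

      def encode(video_desc, kmeans):
          """Quantize a video's descriptors against the codebook and return
          an L1-normalized bag-of-words histogram."""
          words = kmeans.predict(video_desc)
          hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
          return hist / max(hist.sum(), 1.0)

      bow = encode(rng.normal(size=(800, 96)), codebook)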

  15. Human Action Recognition Using Improved Salient Dense Trajectories

    PubMed Central

    Li, Qingwu; Cheng, Haisu; Zhou, Yan; Huo, Guanying

    2016-01-01

    Human action recognition in videos is a topic of active research in computer vision. Dense trajectory (DT) features were shown to be efficient for representing videos in state-of-the-art approaches. In this paper, we present a more effective approach to video representation using improved salient dense trajectories: first, detecting the motion salient region and extracting the dense trajectories by tracking interest points in each spatial scale separately, and then refining the dense trajectories via analysis of the motion saliency. Then, we compute several descriptors (i.e., trajectory displacement, HOG, HOF, and MBH) in the spatiotemporal volume aligned with the trajectories. Finally, in order to represent the videos better, we optimize the framework of bag-of-words according to the motion salient intensity distribution and the idea of sparse coefficient reconstruction. Our architecture is trained and evaluated on the four standard video action datasets of KTH, UCF sports, HMDB51, and UCF50, and the experimental results show that our approach performs competitively compared with state-of-the-art results. PMID:27293425

  16. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  17. Diagnostic of dense plasmas using X-ray spectra

    NASA Astrophysics Data System (ADS)

    Yu, Q. Z.; Zhang, J.; Li, Y. T.; Zhang, Z.; Jin, Z.; Lu, X.; Li, J.; Yu, Y. N.; Jiang, X. H.; Li, W. H.; Liu, S. Y.

    2005-12-01

    The spectrally and spatially resolved X-ray spectra emitted from a dense aluminum plasma produced by 500 J, 1 ns Nd:glass laser pulses are presented. Six primary hydrogen-like and helium-like lines are identified and simulated with the atomic physics code FLY. We find that the plasma is almost completely ionized under the experimental conditions. The highest electron density we measured reaches up to 10^23 cm^-3. The spatial variations of the electron temperature and density are compared with the simulations of the MEDUSA hydrocode for different geometry targets. The results indicate that lateral expansion of the plasma produced with this laser beam plays an important role.

  18. The performance of dense medium processes

    SciTech Connect

    Horsfall, D.W.

    1993-12-31

    Dense medium washing in baths and cyclones is widely carried out in South Africa. The paper shows the reason for the preferred use of dense medium processes rather than gravity concentrators such as jigs. The factors leading to efficient separation in baths are listed and an indication given of the extent to which these factors may be controlled and embodied in the deployment of baths and dense medium cyclones in the planning stages of a plant.

  19. Understanding shape entropy through local dense packing.

    PubMed

    van Anders, Greg; Klotsa, Daphne; Ahmed, N Khalid; Engel, Michael; Glotzer, Sharon C

    2014-11-11

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. Here, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (k_BT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. We show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa.

  1. Percolation in dense storage arrays

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, Scott; Wilcke, Winfried W.; Garner, Robert B.; Huels, Harald

    2002-11-01

    As computers and their accessories become smaller, cheaper, and faster the providers of news, retail sales, and other services we now take for granted on the Internet have met their increasing computing needs by putting more and more computers, hard disks, power supplies, and the data communications linking them to each other and to the rest of the wired world into ever smaller spaces. This has created a new and quite interesting percolation problem. It is no longer desirable to fix computers, storage or switchgear which fail in such a dense array. Attempts to repair things are all too likely to make problems worse. The alternative approach, letting units “fail in place”, be removed from service and routed around, means that a data communications environment will evolve with an underlying regular structure but a very high density of missing pieces. Some of the properties of this kind of network can be described within the existing paradigm of site or bond percolation on lattices, but other important questions have not been explored. I will discuss 3D arrays of hundreds to thousands of storage servers (something which it is quite feasible to build in the next few years), and show that bandwidth, but not percolation fraction or shortest path lengths, is the critical factor affected by the “fail in place” disorder. Redundancy strategies traditionally employed in storage systems may have to be revised. Novel approaches to routing information among the servers have been developed to minimize the impact.

  2. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
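
    The block-to-block selection logic reduces to picking the highest-throughput mode whose SNR threshold the link can support. A minimal sketch in Python; the threshold and rate figures below are illustrative placeholders, not CCSDS values.

      MODCODS = [  # (name, information bits per symbol, required Es/N0 in dB)
          ("BPSK r=1/2", 0.50, 1.0),
          ("QPSK r=1/2", 1.00, 4.0),
          ("8-PSK r=2/3", 2.00, 9.0),
          ("16-APSK r=3/4", 3.00, 12.5),
          ("32-APSK r=4/5", 4.00, 16.0),
      ]

      def pick_modcod(esn0_db):
          """Highest-throughput mode whose threshold is met; falls back to
          the most robust mode otherwise."""
          feasible = [m for m in MODCODS if m[2] <= esn0_db]
          return max(feasible, key=lambda m: m[1]) if feasible else MODCODS[0]

      print(pick_modcod(10.2))   # -> ('8-PSK r=2/3', 2.0, 9.0)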

  3. Design of a 100 J Dense Plasma Focus Z-pinch Device as a Portable Neutron Source

    NASA Astrophysics Data System (ADS)

    Jiang, Sheng; Higginson, Drew; Link, Anthony; Liu, Jason; Schmidt, Andrea

    2015-11-01

    The dense plasma focus (DPF) Z-pinch devices are capable of accelerating ions to high energies through MV/mm-scale electric fields. When deuterium is used as the filling gas, neutrons are generated through beam-target fusion when fast D beams collide with the bulk plasma. The neutron yield of a DPF scales favorably with current, and such devices could be used as portable sources for active interrogation. Past DPF experiments have been optimized empirically. Here we use the particle-in-cell (PIC) code LSP to optimize a portable DPF for high neutron yield prior to building it. In this work, we are designing a DPF device with about 100 J of energy which can generate 10^6-10^7 neutrons. The simulations are run in the fluid mode for the rundown phase and are switched to kinetic mode to capture the anomalous resistivity and beam acceleration process during the pinch. A scan of driver parameters, anode geometries, and gas pressures is performed to maximize the neutron yield. The optimized design is currently under construction. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL.

  4. Dense packings of polyhedra: Platonic and Archimedean solids.

    PubMed

    Torquato, S; Jiao, Y

    2009-10-01

    Understanding the nature of dense particle packings is a subject of intense research in the physical, mathematical, and biological sciences. The preponderance of previous work has focused on spherical particles and very little is known about dense polyhedral packings. We formulate the problem of generating dense packings of nonoverlapping, nontiling polyhedra within an adaptive fundamental cell subject to periodic boundary conditions as an optimization problem, which we call the adaptive shrinking cell (ASC) scheme. This optimization problem is solved here (using a variety of multiparticle initial configurations) to find the dense packings of each of the Platonic solids in three-dimensional Euclidean space R^3, except for the cube, which is the only Platonic solid that tiles space. We find the densest known packings of tetrahedra, icosahedra, dodecahedra, and octahedra with densities 0.823..., 0.836..., 0.904..., and 0.947..., respectively. It is noteworthy that the densest tetrahedral packing possesses no long-range order. Unlike the densest tetrahedral packing, which must not be a Bravais lattice packing, the densest packings of the other nontiling Platonic solids that we obtain are their previously known optimal (Bravais) lattice packings. We also derive a simple upper bound on the maximal density of packings of congruent nonspherical particles and apply it to Platonic solids, Archimedean solids, superballs, and ellipsoids. Provided that what we term the "asphericity" (ratio of the circumradius to inradius) is sufficiently small, the upper bounds are relatively tight and thus close to the corresponding densities of the optimal lattice packings of the centrally symmetric Platonic and Archimedean solids. Our simulation results, rigorous upper bounds, and other theoretical arguments lead us to the conjecture that the densest packings of Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This can be
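
    The flavor of such a packing optimization can be seen in a toy two-dimensional version: hard disks in a periodic square box, random particle displacements, and occasional box-shrink trials, each rejected if it creates an overlap. This Python/numpy sketch is only an analogy; the actual ASC scheme packs polyhedra and adapts the full shape of the fundamental cell.

      import numpy as np

      rng = np.random.default_rng(1)
      n, r, L = 20, 0.5, 12.0

      def overlaps(p, box):
          d = p[:, None, :] - p[None, :, :]
          d -= box * np.round(d / box)           # minimum-image convention
          dist = np.hypot(d[..., 0], d[..., 1])
          np.fill_diagonal(dist, np.inf)
          return (dist < 2 * r).any()

      pos = rng.uniform(0, L, size=(n, 2))
      while overlaps(pos, L):                    # start from a valid state
          pos = rng.uniform(0, L, size=(n, 2))

      for step in range(20000):
          trial = pos.copy()
          i = rng.integers(n)
          trial[i] = (trial[i] + rng.normal(0, 0.1, 2)) % L
          if not overlaps(trial, L):
              pos = trial
          if step % 50 == 0 and not overlaps(pos * 0.999, L * 0.999):
              pos, L = pos * 0.999, L * 0.999    # shrink the cell
      print("packing fraction:", n * np.pi * r**2 / L**2)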

  5. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the

  6. MCNP code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  7. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  8. TDRSS telecommunication system PN code analysis

    NASA Technical Reports Server (NTRS)

    Gold, R.

    1977-01-01

    The pseudonoise (PN) code library for the Tracking and Data Relay Satellite System (TDRSS) Services was defined and described. The code library was chosen to minimize user transponder hardware requirements and optimize system performance. Special precautions were taken to ensure sufficient code phase separation to minimize cross-correlation sidelobes, and to avoid the generation of spurious code components which would interfere with system performance.
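
    PN libraries of this kind are typically built from maximal-length shift-register (m-)sequences, whose pairwise cross-correlations determine the sidelobes. The Python/numpy sketch below generates two period-31 m-sequences and combines them Gold-code style; the tap sets are illustrative primitive polynomials, not the actual TDRSS codes.

      import numpy as np

      def m_sequence(taps, length, nbits=5):
          """Fibonacci LFSR output; `taps` are 1-indexed feedback stages."""
          reg = [1] + [0] * (nbits - 1)          # any nonzero seed works
          out = []
          for _ in range(length):
              out.append(reg[-1])
              fb = 0
              for t in taps:
                  fb ^= reg[t - 1]
              reg = [fb] + reg[:-1]
          return np.array(out)

      a = m_sequence([5, 3], 31)                 # x^5 + x^3 + 1
      b = m_sequence([5, 4, 3, 2], 31)           # x^5 + x^4 + x^3 + x^2 + 1
      gold = a ^ b                               # one member of a Gold-type family
      # Peak cyclic cross-correlation of the bipolar parents; preferred pairs
      # keep this small, which is what "code phase separation" relies on.
      ca, cb = 1 - 2 * a, 1 - 2 * b
      print(max(abs(int(ca @ np.roll(cb, k))) for k in range(31)))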

  9. Wide Variation Seen in 'Dense' Breast Diagnoses

    MedlinePlus

    ... defined mammography patients' breasts as dense. Higher breast density is a risk factor for breast cancer, experts ... could have implications for the so-called breast density notification laws that have been passed in about ...

  10. Dynamical theory of dense groups of galaxies

    NASA Technical Reports Server (NTRS)

    Mamon, Gary A.

    1990-01-01

    It is well known that galaxies associate in groups and clusters. Perhaps 40% of all galaxies are found in groups of 4 to 20 galaxies (e.g., Tully 1987). Although most groups appear to be so loose that the galaxy interactions within them ought to be insignificant, the apparently densest groups, known as compact groups, appear so dense when seen in projection onto the plane of the sky that their members often overlap. These groups thus appear as dense as the cores of rich clusters. The most popular catalog of compact groups, compiled by Hickson (1982), includes isolation among its selection criteria. Therefore, in comparison with the cores of rich clusters, Hickson's compact groups (HCGs) appear to be the densest isolated regions in the Universe (in galaxies per unit volume), and thus provide in principle a clean laboratory for studying the competition of very strong gravitational interactions. The $64,000 question here is then: Are compact groups really bound systems as dense as they appear? If dense groups indeed exist, then one expects that each of the dynamical processes leading to the interaction of their member galaxies should be greatly enhanced. This leads us to the questions: How stable are dense groups? How do they form? And the related question, fascinating to any theorist: What dynamical processes predominate in dense groups of galaxies? If HCGs are not bound dense systems, but instead 1D chance alignments (Mamon 1986, 1987; Walke & Mamon 1989) or 3D transient cores (Rose 1979) within larger looser systems of galaxies, then the relevant question is: How frequent are chance configurations within loose groups? Here, the author answers these last four questions after comparing in some detail the methods used and the results obtained in the different studies of dense groups.

  11. Magnetic Phases in Dense Quark Matter

    SciTech Connect

    Incera, Vivian de la

    2007-10-26

    In this paper I discuss the magnetic phases of the three-flavor color superconductor. These phases can take place at different field strengths in a highly dense quark system. Given that the best natural candidates for the realization of color superconductivity are the extremely dense cores of neutron stars, which typically have very large magnetic fields, the magnetic phases here discussed could have implications for the physics of these compact objects.

  12. METHOD OF PRODUCING DENSE CONSOLIDATED METALLIC REGULUS

    DOEpatents

    Magel, T.T.

    1959-08-11

    A method is presented for reducing dense metal compositions while simultaneously separating impurities from the reduced dense metal and casting the reduced purified dense metal, such as uranium, into well consolidated metal ingots. The reduction is accomplished by heating the dense metallic salt in the presence of a reducing agent, such as an alkali metal or alkaline earth metal, in a bomb type reacting chamber, while applying centrifugal force to the reacting materials. Separation of the metal from the impurities is accomplished essentially by the incorporation of a constricted passageway at the vertex of a conical reacting chamber which is in direct communication with a collecting chamber. When a centrifugal force is applied to the molten metal and slag from the reduction in a direction collinear with the axis of the constricted passage, the dense molten metal is forced therethrough while the less dense slag is retained within the reaction chamber, resulting in a simultaneous separation of the reduced molten metal from the slag and a compacting of the reduced metal into a homogeneous mass.

  13. Mycobacterial RNA isolation optimized for non-coding RNA: high fidelity isolation of 5S rRNA from Mycobacterium bovis BCG reveals novel post-transcriptional processing and a complete spectrum of modified ribonucleosides

    PubMed Central

    Hia, Fabian; Chionh, Yok Hian; Pang, Yan Ling Joy; DeMott, Michael S.; McBee, Megan E.; Dedon, Peter C.

    2015-01-01

    A major challenge in the study of mycobacterial RNA biology is the lack of a comprehensive RNA isolation method that overcomes the unusual cell wall to faithfully yield the full spectrum of non-coding RNA (ncRNA) species. Here, we describe a simple and robust procedure optimized for the isolation of total ncRNA, including 5S, 16S and 23S ribosomal RNA (rRNA) and tRNA, from mycobacteria, using Mycobacterium bovis BCG to illustrate the method. Based on a combination of mechanical disruption and liquid and solid-phase technologies, the method produces all major species of ncRNA in high yield and with high integrity, enabling direct chemical and sequence analysis of the ncRNA species. The reproducibility of the method with BCG was evident in bioanalyzer electrophoretic analysis of isolated RNA, which revealed quantitatively significant differences in the ncRNA profiles of exponentially growing and non-replicating hypoxic bacilli. The method also overcame an historical inconsistency in 5S rRNA isolation, with direct sequencing revealing a novel post-transcriptional processing of 5S rRNA to its functional form and with chemical analysis revealing seven post-transcriptional ribonucleoside modifications in the 5S rRNA. This optimized RNA isolation procedure thus provides a means to more rigorously explore the biology of ncRNA species in mycobacteria. PMID:25539917

  14. Mycobacterial RNA isolation optimized for non-coding RNA: high fidelity isolation of 5S rRNA from Mycobacterium bovis BCG reveals novel post-transcriptional processing and a complete spectrum of modified ribonucleosides.

    PubMed

    Hia, Fabian; Chionh, Yok Hian; Pang, Yan Ling Joy; DeMott, Michael S; McBee, Megan E; Dedon, Peter C

    2015-03-11

    A major challenge in the study of mycobacterial RNA biology is the lack of a comprehensive RNA isolation method that overcomes the unusual cell wall to faithfully yield the full spectrum of non-coding RNA (ncRNA) species. Here, we describe a simple and robust procedure optimized for the isolation of total ncRNA, including 5S, 16S and 23S ribosomal RNA (rRNA) and tRNA, from mycobacteria, using Mycobacterium bovis BCG to illustrate the method. Based on a combination of mechanical disruption and liquid and solid-phase technologies, the method produces all major species of ncRNA in high yield and with high integrity, enabling direct chemical and sequence analysis of the ncRNA species. The reproducibility of the method with BCG was evident in bioanalyzer electrophoretic analysis of isolated RNA, which revealed quantitatively significant differences in the ncRNA profiles of exponentially growing and non-replicating hypoxic bacilli. The method also overcame an historical inconsistency in 5S rRNA isolation, with direct sequencing revealing a novel post-transcriptional processing of 5S rRNA to its functional form and with chemical analysis revealing seven post-transcriptional ribonucleoside modifications in the 5S rRNA. This optimized RNA isolation procedure thus provides a means to more rigorously explore the biology of ncRNA species in mycobacteria.

  15. Computational experience with a dense column feature for interior-point methods

    SciTech Connect

    Wenzel, M.; Czyzyk, J.; Wright, S.

    1997-08-01

    Most software that implements interior-point methods for linear programming formulates the linear algebra at each iteration as a system of normal equations. This approach can be extremely inefficient when the constraint matrix has dense columns, because the density of the normal equations matrix is much greater than that of the constraint matrix and the system is expensive to solve. In this report the authors describe a more efficient approach for this case, which involves handling the dense columns by using a Schur-complement method and conjugate gradient iteration. The authors report numerical results with the code PCx, into which the technique now has been incorporated.
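
    For a single dense column the trick can be shown directly: split A = [S | d], factor only the sparse normal matrix M0 = S Ds S^T, and fold the dense column back in with a rank-one Sherman-Morrison correction (the Schur-complement view of the same algebra). A Python/numpy sketch on synthetic data, with a dense solve standing in for a sparse Cholesky factorization; this illustrates the principle, not the PCx implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      m, ns = 50, 200
      S = rng.normal(size=(m, ns)) * (rng.random((m, ns)) < 0.05)  # sparse part
      d = rng.normal(size=m)                                       # dense column
      Ds, sigma = np.diag(rng.random(ns) + 0.1), 2.0
      b = rng.normal(size=m)

      M0 = S @ Ds @ S.T + 1e-8 * np.eye(m)       # factor this (sparse in practice)

      def solve0(rhs):                           # stands in for a sparse solve
          return np.linalg.solve(M0, rhs)

      y, u = solve0(b), solve0(d)
      x = y - (sigma * (d @ y) / (1 + sigma * (d @ u))) * u

      # Check against the full normal matrix, dense column included:
      M = M0 + sigma * np.outer(d, d)
      print(np.allclose(M @ x, b, atol=1e-6))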

  16. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  17. MHD modeling of dense plasma focus electrode shape variation

    NASA Astrophysics Data System (ADS)

    McLean, Harry; Hartman, Charles; Schmidt, Andrea; Tang, Vincent; Link, Anthony; Ellsworth, Jen; Reisman, David

    2013-10-01

    The dense plasma focus (DPF) is a very simple device physically, but results to date indicate that very extensive physics is needed to understand the details of operation, especially during the final pinch where kinetic effects become very important. Nevertheless, the overall effects of electrode geometry, electrode size, and drive circuit parameters can be informed efficiently using MHD fluid codes, especially in the run-down phase before the final pinch. These kinds of results can then guide subsequent, more detailed fully kinetic modeling efforts. We report on resistive 2-d MHD modeling results applying the TRAC-II code to the DPF with an emphasis on varying anode and cathode shape. Drive circuit variations are handled in the code using a self-consistent circuit model for the external capacitor bank since the device impedance is strongly coupled to the internal plasma physics. Electrode shape is characterized by the ratio of inner diameter to outer diameter, length to diameter, and various parameterizations for tapering. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  18. Maxmin lambda allocation for dense wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Tsai, Wei K.; Ros, Jordi

    2002-08-01

    We present a heuristic for computing the discrete maximum-minimum (maxmin) rates for dense WDM- (DWDM-) based optical subnetworks. Discrete maxmin allocation is proposed here as the preferred way of assigning wavelengths to the flows found to be suitable for lightpath switching. The discrete maxmin optimality condition is shown to be a unifying principle underlying both the continuous maxmin and discrete maxmin optimality conditions. Among the many discrete maxmin solutions for each assignment problem, lexicographic optimal solutions can be argued to be the best in the true sense of maxmin. However, the problem of finding lexicographic optimal solutions is known to be NP-complete (NP is the class of problems that a nondeterministic Turing machine accepts in polynomial time). The heuristic proposed here is tested against all possible networks such that |G| + |W| ≤ 10, where G and W are the set of links and the set of flows of the network, respectively. From 1,084,112 possible networks, the heuristic produces the exact lexicographic solutions with 99.8% probability. Furthermore, for the 0.2% of cases in which the solutions are nonoptimal, 99.8% of these solutions are within the minimal possible distance from the true lexicographic optimal solutions.
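
    For the continuous relaxation, the maxmin-fair allocation itself is easy to compute by progressive filling: raise all rates together and freeze the flows crossing each link as it saturates. The sketch below (plain Python, hypothetical link and flow names) shows that baseline; the hard, NP-complete part addressed by the heuristic is the discrete, lexicographic wavelength-integral version.

      def maxmin(links, flows):
          """links: {link: capacity}; flows: {flow: set of links it uses}."""
          rate = {f: 0.0 for f in flows}
          active, cap = set(flows), dict(links)
          while active:
              # largest uniform increment before some link is exhausted
              inc = min(cap[l] / sum(1 for f in active if l in flows[f])
                        for l in cap if any(l in flows[f] for f in active))
              for f in active:
                  rate[f] += inc
              for l in cap:
                  cap[l] -= inc * sum(1 for f in active if l in flows[f])
              active -= {f for f in active
                         if any(cap[l] <= 1e-12 for l in flows[f])}
          return rate

      # Two flows share link L1; f3 then soaks up the slack on L2:
      print(maxmin({"L1": 1.0, "L2": 2.0},
                   {"f1": {"L1"}, "f2": {"L1", "L2"}, "f3": {"L2"}}))
      # -> {'f1': 0.5, 'f2': 0.5, 'f3': 1.5}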

  19. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  20. Formation and evolution of black holes in dense star clusters

    NASA Astrophysics Data System (ADS)

    Goswami, Sanghamitra

    Using supercomputer simulations combining stellar dynamics and stellar evolution, we have studied various problems related to the existence of black holes in dense star clusters. We consider both stellar and intermediate-mass black holes, and we focus on massive, dense star clusters, such as old globular clusters and young, so-called "super star clusters." The first problem concerns the formation of intermediate-mass black holes in young clusters through the runaway collision instability. A promising mechanism to form intermediate-mass black holes (IMBHs) is runaway mergers in dense star clusters, where main-sequence stars collide repeatedly and form a very massive star (VMS), which then collapses to a black hole (BH). Here we study the effects of primordial mass segregation and the importance of the stellar initial mass function (IMF) on the runaway growth of VMSs using a dynamical Monte Carlo code to model systems with N as high as 10^6 stars. Our Monte Carlo code includes an explicit treatment of all stellar collisions. We place special emphasis on the possibility of top-heavy IMFs, as observed in some very young massive clusters. We find that both primordial mass segregation and the shape of the IMF affect the rate of core collapse of star clusters and thus the time of the runaway. When we include primordial mass segregation we generally see a decrease in core collapse time (tcc). Although for smaller degrees of primordial mass segregation this decrease in tcc is mostly due to the change in the density profile of the cluster, for highly mass-segregated (primordial) clusters, it is the increase in the average mass in the core which reduces the central relaxation time, decreasing tcc. Finally, flatter IMFs generally increase the average mass in the whole cluster, which increases tcc. For the range of IMFs investigated in this thesis, this increase in tcc is to some degree balanced by stellar collisions, which accelerate core collapse. Thus there is no

  1. MPQC: Performance Analysis and Optimization

    SciTech Connect

    Sarje, Abhinav; Williams, Samuel; Bailey, David

    2012-11-30

    MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.

  2. Accessibility of electron Bernstein modes in over-dense plasma

    SciTech Connect

    Carter, M. D.; Bigelow, T. S.; Batchelor, D. B.

    1999-09-20

    Mode-conversion between the ordinary, extraordinary and electron Bernstein modes near the plasma edge may allow signals generated by electrons in an over-dense plasma to be detected. Alternatively, high frequency power may gain accessibility to the core plasma through this mode conversion process. Many of the tools used for ion cyclotron antenna design can also be applied near the electron cyclotron frequency. In this paper, we investigate the possibilities for an antenna that may couple to electron Bernstein modes inside an over-dense plasma. The optimum values for wavelengths that undergo mode-conversion are found by scanning the poloidal and toroidal response of the plasma using a warm plasma slab approximation with a sheared magnetic field. Only a very narrow region of the edge can be examined in this manner; however, ray tracing may be used to follow the mode converted power in a more general geometry. It is eventually hoped that the methods can be extended to a hot plasma representation. Using antenna design codes, some basic antenna shapes will be considered to see what types of antennas might be used to detect or launch modes that penetrate the cutoff layer in the edge plasma.

  3. Accessibillity of Electron Bernstein Modes in Over-Dense Plasma

    SciTech Connect

    Batchelor, D.B.; Bigelow, T.S.; Carter, M.D.

    1999-04-12

    Mode-conversion between the ordinary, extraordinary and electron Bernstein modes near the plasma edge may allow signals generated by electrons in an over-dense plasma to be detected. Alternatively, high frequency power may gain accessibility to the core plasma through this mode conversion process. Many of the tools used for ion cyclotron antenna design can also be applied near the electron cyclotron frequency. In this paper, we investigate the possibilities for an antenna that may couple to electron Bernstein modes inside an over-dense plasma. The optimum values for wavelengths that undergo mode-conversion are found by scanning the poloidal and toroidal response of the plasma using a warm plasma slab approximation with a sheared magnetic field. Only a very narrow region of the edge can be examined in this manner; however, ray tracing may be used to follow the mode converted power in a more general geometry. It is eventually hoped that the methods can be extended to a hot plasma representation. Using antenna design codes, some basic antenna shapes will be considered to see what types of antennas might be used to detect or launch modes that penetrate the cutoff layer in the edge plasma.

  4. H3+ in dense and diffuse clouds.

    PubMed

    McCall, B J; Hinkle, K H; Geballe, T R; Oka, T

    1998-01-01

    Interstellar H3+ has been detected in dense as well as diffuse clouds using three 3.7 micron infrared spectral lines of the ν2 fundamental band. Column densities of H3+ from (1.7-5.5) x 10^14 cm^-2 have been measured in dense clouds in absorption against the infrared continua of the deeply embedded young stellar objects GL2136, W33A, MonR2 IRS 3, GL961E, and GL2591. Strong and broad H3+ absorptions have been detected in dense and diffuse clouds towards GC IRS 3 and GCS3-2 in the region of the galactic center. A large column density of H3+, comparable to that of a dense cloud, has been detected towards the visible star Cygnus OB2 No. 12, which has a line of sight that crosses mostly diffuse clouds. The H3+ chemistry of dense and diffuse clouds is discussed using a very simple model. Some future projects and problems are discussed.

  5. Coalescence preference in densely packed microbubbles

    SciTech Connect

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook

    2015-01-13

    A bubble merged from two parent bubbles with different sizes tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in the relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important for understanding the reality of coalescence dynamics in a variety of packing situations of soft matter.

  6. Coalescence preference in densely packed microbubbles

    DOE PAGESBeta

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook

    2015-01-13

    A bubble merged from two parent bubbles with different sizes tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in the relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important for understanding the reality of coalescence dynamics in a variety of packing situations of soft matter.

  7. Propagation of light in a Dense Medium

    NASA Astrophysics Data System (ADS)

    Masood, Samina; Saleem, Iram

    The propagation of light is studied in a very dense system. The renormalization scheme of QED is used to understand the propagation of light in a hot and dense medium. We consider a medium with a very large chemical potential and relatively small temperature. The generalized results for the vacuum polarization of the photon in a hot and dense medium are used to study the behavior of light in such a system. Our hypothetical system corresponds to a heat bath of electrons at an equilibrium temperature in which the density of electrons is large compared to the temperature of the medium. Such systems have previously been identified as classical systems because the chemical potential is large enough to dominate the temperature.

  8. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
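
    The innermost (n, n-16) layer is compact enough to show in full. The Python sketch below uses the CCITT generator x^16 + x^12 + x^5 + 1 with an all-ones preset, the convention commonly associated with CCSDS frame error control; the exact preset and output conventions should be checked against the standard.

      def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
          """Bitwise CRC-16, generator 0x1021 (x^16 + x^12 + x^5 + 1)."""
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
          return crc

      frame = b"telemetry payload"
      protected = frame + crc16_ccitt(frame).to_bytes(2, "big")
      assert crc16_ccitt(protected) == 0   # receiver side: residue is zero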

  9. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  10. Ab Initio Simulations of Dense Helium Plasmas

    SciTech Connect

    Wang Cong; He Xiantu; Zhang Ping

    2011-04-08

    We study the thermophysical properties of dense helium plasmas by using quantum molecular dynamics and orbital-free molecular dynamics simulations, where densities from 400 to 800 g/cm^3 and temperatures up to 800 eV are considered. Results are presented for the equation of state. From the Kubo-Greenwood formula, we derive the electrical conductivity and electronic thermal conductivity. In particular, with the increase in temperature, we discuss the change in the Lorenz number, which indicates a transition from a strongly coupled, degenerate state to a moderately coupled, partially degenerate regime for dense helium.
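
    The Lorenz number in question is the simple ratio L = kappa / (sigma * T), compared against the degenerate (Sommerfeld) limit L0 = (pi^2/3)(k_B/e)^2 ~ 2.44e-8 W Ohm K^-2. A Python sketch with placeholder conductivities, not the paper's data:

      from math import pi

      K_B, E = 1.380649e-23, 1.602176634e-19       # J/K, C
      L0 = (pi**2 / 3) * (K_B / E) ** 2            # Sommerfeld value

      def lorenz(kappa, sigma, T):
          """kappa in W/(m K), sigma in S/m, T in K."""
          return kappa / (sigma * T)

      # Hypothetical point; departures of L/L0 from 1 signal the loss of
      # degeneracy that the study tracks with temperature.
      print(lorenz(kappa=5.0e4, sigma=2.0e6, T=1.16e6) / L0)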

  11. Ion Beam Heated Target Simulations for Warm Dense Matter Physics and Inertial Fusion Energy

    SciTech Connect

    Barnard, J J; Armijo, J; Bailey, D S; Friedman, A; Bieniosek, F M; Henestroza, E; Kaganovich, I; Leung, P T; Logan, B G; Marinak, M M; More, R M; Ng, S F; Penn, G E; Perkins, L J; Veitzer, S; Wurtele, J S; Yu, S S; Zylstra, A B

    2008-08-12

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  13. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
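
    Concretely, the Gallager and van Voorhis result is that a Golomb code with the right parameter m is optimal for a geometric source, the Rice subcodes being the m = 2^k special cases. A minimal Python sketch:

      def golomb_encode(n: int, m: int) -> str:
          """Golomb codeword for n >= 0: unary quotient, then a
          truncated-binary remainder. Rice codes are the m = 2**k case."""
          q, r = divmod(n, m)
          b = m.bit_length()
          if r < (1 << b) - m:                   # short remainders get b-1 bits
              rem = format(r, f"0{b - 1}b") if b > 1 else ""
          else:
              rem = format(r + (1 << b) - m, f"0{b}b")
          return "1" * q + "0" + rem

      # Gallager & van Voorhis: for P(n) = (1 - t) t^n the optimal parameter
      # is the smallest m with t**m + t**(m + 1) <= 1.
      t = 0.8
      m = next(m for m in range(1, 100) if t**m + t**(m + 1) <= 1)
      print(m, [golomb_encode(n, m) for n in range(4)])
      # -> 3 ['00', '010', '011', '100']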

  14. DENSE NONAQUEOUS PHASE LIQUIDS -- A WORKSHOP SUMMARY

    EPA Science Inventory

    site characterization, and, therefore, DNAPL remediation, can be expected. Dense nonaqueous phase liquids (DNAPLs) in the subsurface are long-term sources of ground-water contamination, and may persist for centuries before dissolving completely in adjacent ground water. In respo...

  15. The Southern California Dense GPS Geodetic Array

    NASA Technical Reports Server (NTRS)

    Webb, F.

    1994-01-01

    The Southern California Earthquake Center is coordinating an effort by scientists at the Jet Propulsion Laboratory, the U.S. Geological Survey, and various academic institutions to establish a dense 250-station, continuously recording GPS geodetic array in southern California for measuring crustal deformation associated with slip on the numerous faults that underlie the major metropolitan areas of southern California.

  16. Coalescence preference in dense packing of bubbles

    NASA Astrophysics Data System (ADS)

    Kim, Yeseul; Gim, Bopil; Weon, Byung Mook

    2015-11-01

    Coalescence preference is the tendency of a bubble merged from the contact of two original (parent) bubbles to sit nearer to the bigger parent. Here, we show that the coalescence preference can be blocked by dense packing of neighboring bubbles. We use high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events, which occur on microsecond timescales, inside a dense packing of microbubbles with a local packing fraction of ~40%. Previous theory and experimental evidence predict a power of -5 between the relative coalescence position and the parent size. However, our new observation for coalescence preference in densely packed microbubbles shows a different power of -2. We believe that this result may be important to understand coalescence dynamics in dense packing of soft matter. This work (NRF-2013R1A22A04008115) was supported by the Mid-career Researcher Program through an NRF grant funded by the MEST, and also by the Ministry of Science, ICT and Future Planning (2009-0082580) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A6A3A04039257).

  17. Preparation of a dense, polycrystalline ceramic structure

    DOEpatents

    Cooley, Jason; Chen, Ching-Fong; Alexander, David

    2010-12-07

    Ceramic nanopowder was sealed inside a metal container under a vacuum. The sealed evacuated container was forced through a severe deformation channel at an elevated temperature below the melting point of the ceramic nanopowder. The result was a dense nanocrystalline ceramic structure inside the metal container.

  18. Understanding neutron production in the deuterium dense plasma focus

    SciTech Connect

    Appelbe, Brian; Chittenden, Jeremy

    2014-12-15

    The deuterium Dense Plasma Focus (DPF) can produce copious amounts of MeV neutrons and can be used as an efficient neutron source. However, the mechanism by which neutrons are produced within the DPF is poorly understood and this limits our ability to optimize the device. In this paper we present results from a computational study aimed at understanding how neutron production occurs in DPFs with a current between 70 kA and 500 kA and which parameters can affect it. A combination of MHD and kinetic tools are used to model the different stages of the DPF implosion. It is shown that the anode shape can significantly affect the structure of the imploding plasma and that instabilities in the implosion lead to the generation of large electric fields at stagnation. These electric fields can accelerate deuterium ions within the stagnating plasma to large (>100 keV) energies leading to reactions with ions in the cold dense plasma. It is shown that the electromagnetic fields present can significantly affect the trajectories of the accelerated ions and the resulting neutron production.

  19. Understanding neutron production in the deuterium dense plasma focus

    NASA Astrophysics Data System (ADS)

    Appelbe, Brian; Chittenden, Jeremy

    2014-12-01

    The deuterium Dense Plasma Focus (DPF) can produce copious amounts of MeV neutrons and can be used as an efficient neutron source. However, the mechanism by which neutrons are produced within the DPF is poorly understood and this limits our ability to optimize the device. In this paper we present results from a computational study aimed at understanding how neutron production occurs in DPFs with a current between 70 kA and 500 kA and which parameters can affect it. A combination of MHD and kinetic tools are used to model the different stages of the DPF implosion. It is shown that the anode shape can significantly affect the structure of the imploding plasma and that instabilities in the implosion lead to the generation of large electric fields at stagnation. These electric fields can accelerate deuterium ions within the stagnating plasma to large (>100 keV) energies leading to reactions with ions in the cold dense plasma. It is shown that the electromagnetic fields present can significantly affect the trajectories of the accelerated ions and the resulting neutron production.

  20. Revisiting Intrinsic Curves for Efficient Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sohn, G.; Théau, J.; Ménard, P.

    2016-06-01

    Dense stereo matching is one of the fundamental and active areas of photogrammetry. The increasing image resolution of digital cameras as well as the growing interest in unconventional imaging, e.g. unmanned aerial imagery, has exposed stereo image pairs to serious occlusion, noise and matching ambiguity. This has also resulted in an increase in the range of disparity values that should be considered for matching. Therefore, conventional methods of dense matching need to be revised to achieve higher levels of efficiency and accuracy. In this paper, we present an algorithm that uses the concepts of intrinsic curves to propose sparse disparity hypotheses for each pixel. Then, the hypotheses are propagated to adjoining pixels by label-set enlargement based on the proximity in the space of intrinsic curves. The same concepts are applied to model occlusions explicitly via a regularization term in the energy function. Finally, a global optimization stage is performed using belief-propagation to assign one of the disparity hypotheses to each pixel. By searching only through a small fraction of the whole disparity search space and handling occlusions and ambiguities, the proposed framework could achieve high levels of accuracy and efficiency.

  1. Efficiently dense hierarchical graphene based aerogel electrode for supercapacitors

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Lu, Chengxing; Peng, Huifen; Zhang, Xin; Wang, Zhenkun; Wang, Gongkai

    2016-08-01

    Boosting gravimetric and volumetric capacitances simultaneously at a high rate remains a challenge in the development of graphene based supercapacitors. We report the preparation of dense hierarchical graphene/activated carbon composite aerogels via a reduction induced self-assembly process coupled with a drying post treatment. The compact and porous structures of the composite aerogels could be maintained. The drying post treatment has significant effects on increasing the packing density of the aerogels. The introduced activated carbons play the key roles of spacers and bridges, mitigating the restacking of adjacent graphene nanosheets and connecting lateral and vertical graphene nanosheets, respectively. The optimized aerogel with a packing density of 0.67 g cm-3 could deliver maximum gravimetric and volumetric capacitances of 128.2 F g-1 and 85.9 F cm-3, respectively, at a current density of 1 A g-1 in aqueous electrolyte, showing no apparent degradation of the specific capacitance at a current density of 10 A g-1 after 20000 cycles. The corresponding gravimetric and volumetric capacitances of 116.6 F g-1 and 78.1 F cm-3 with an acceptable cyclic stability are also achieved in ionic liquid electrolyte. The results show a feasible strategy of designing dense hierarchical graphene based aerogels for supercapacitors.

  3. Texture-Aware Dense Image Matching Using Ternary Census Transform

    NASA Astrophysics Data System (ADS)

    Hu, Han; Chen, Chongtai; Wu, Bo; Yang, Xiaoxia; Zhu, Qing; Ding, Yulin

    2016-06-01

    Textureless areas and geometric discontinuities are major problems in state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost methods, but in textureless areas, where the intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes, and must compromise between smoothness and discontinuities. The aim of this study is to provide a method to overcome these issues in dense image matching, by extending the industry-proven Semi-Global Matching through 1) developing a ternary census transform, which takes three outputs in a single order comparison and encodes the results in two bits rather than one, and 2) using texture information to self-tune the parameters, which both preserves sharp edges and enforces smoothness when necessary. Experimental results using various datasets from different platforms have shown that the visual quality of the triangulated point clouds in urban areas can be largely improved by the proposed methods.
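
    To make the two-bit encoding concrete, here is a minimal sketch of a ternary census transform and its matching cost, assuming a 3x3 window, a noise tolerance eps, and wrap-around borders (all assumptions of this sketch; the paper's exact settings may differ):

```python
import numpy as np

# Each neighbour is classified as darker, similar, or brighter than the
# centre pixel and encoded in two bits, so similar intensities in flat,
# textureless areas no longer flip the code under small random noise.
def ternary_census(img, eps=4):
    img = img.astype(np.int32)
    codes = np.zeros(img.shape, dtype=np.uint32)
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    for dy, dx in offsets:
        neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        diff = neighbour - img
        # 0 = darker, 1 = similar within eps, 2 = brighter
        symbol = np.where(diff < -eps, 0, np.where(diff > eps, 2, 1))
        codes = (codes << np.uint32(2)) | symbol.astype(np.uint32)
    return codes  # note: np.roll wraps at the borders in this sketch

def ternary_cost(a, b):
    """Matching cost: number of differing 2-bit symbols between two codes."""
    x = a ^ b
    cost = np.zeros(a.shape, dtype=np.uint8)
    for _ in range(8):
        cost += ((x & np.uint32(3)) != 0).astype(np.uint8)
        x = x >> np.uint32(2)
    return cost
```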

  4. Monte Carlo simulations of ionization potential depression in dense plasmas

    NASA Astrophysics Data System (ADS)

    Stransky, M.

    2016-01-01

    A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate the modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated to be independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers, as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
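
    For orientation, a back-of-the-envelope sketch of the Debye-Hückel limit referred to above (an illustration added for this listing, not the paper's Monte Carlo model), where the depression for a charge-Z ion is roughly (Z+1)e^2/(4*pi*eps0*lambda_D):

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
KB = 1.380649e-23            # J/K

def debye_length(n_e, T_e):
    """Electron Debye length in metres (n_e in m^-3, T_e in kelvin)."""
    return np.sqrt(EPS0 * KB * T_e / (n_e * E_CHARGE**2))

def ipd_debye_hueckel(n_e, T_e, Z):
    """Depression in eV (one factor of e cancels in the J -> eV change)."""
    return (Z + 1) * E_CHARGE / (4 * np.pi * EPS0 * debye_length(n_e, T_e))

# Example: hydrogen-like conditions, n_e = 1e26 m^-3, T_e ~ 10 eV
print(ipd_debye_hueckel(1e26, 1.16e5, Z=1))   # ~1.2 eV
```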

  5. Applications of Coding in Network Communications

    ERIC Educational Resources Information Center

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  6. Fully kinetic simulations of megajoule-scale dense plasma focus

    SciTech Connect

    Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M.; Welch, D.; Meehan, B. T.; Hagen, E. C.

    2014-10-15

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood, and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  7. Fully kinetic simulations of megajoule-scale dense plasma focus

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Link, A.; Welch, D.; Meehan, B. T.; Tang, V.; Halvorson, C.; May, M.; Hagen, E. C.

    2014-10-01

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood, and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  8. PHOTOCHEMICAL HEATING OF DENSE MOLECULAR GAS

    SciTech Connect

    Glassgold, A. E.; Najita, J. R.

    2015-09-10

    Photochemical heating is analyzed with an emphasis on the heating generated by chemical reactions initiated by the products of photodissociation and photoionization. The immediate products are slowed down by collisions with the ambient gas and then heat the gas. In addition to this direct process, heating is also produced by the subsequent chemical reactions initiated by these products. Some of this chemical heating comes from the kinetic energy of the reaction products and the rest from collisional de-excitation of the product atoms and molecules. In considering dense gas dominated by molecular hydrogen, we find that the chemical heating is sometimes as large as, if not much larger than, the direct heating. In very dense gas, the total photochemical heating approaches 10 eV per photodissociation (or photoionization), competitive with other ways of heating molecular gas.

  9. Active fluidization in dense glassy systems.

    PubMed

    Mandal, Rituparno; Bhuyan, Pranab Jyoti; Rao, Madan; Dasgupta, Chandan

    2016-07-20

    Dense soft glasses show strong collective caging behavior at sufficiently low temperatures. Using molecular dynamics simulations of a model glass former, we show that the incorporation of activity or self-propulsion, f0, can induce cage breaking and fluidization, resulting in the disappearance of the glassy phase beyond a critical f0. The diffusion coefficient crosses over from being strongly to weakly temperature dependent as f0 is increased. In addition, we demonstrate that activity induces a crossover from a fragile to a strong glass and a tendency of active particles to cluster. Our results are of direct relevance to the collective dynamics of dense active colloidal glasses and to recent experiments on tagged particle diffusion in living cells. PMID:27380935

  10. Dense Deposit Disease and C3 Glomerulopathy

    PubMed Central

    Barbour, Thomas D.; Pickering, Matthew C.; Terence Cook, H.

    2013-01-01

    Summary C3 glomerulopathy refers to those renal lesions characterized histologically by predominant C3 accumulation within the glomerulus, and pathogenetically by aberrant regulation of the alternative pathway of complement. Dense deposit disease is distinguished from other forms of C3 glomerulopathy by its characteristic appearance on electron microscopy. The extent to which dense deposit disease also differs from other forms of C3 glomerulopathy in terms of clinical features, natural history, and outcomes of treatment including renal transplantation is less clear. We discuss the pathophysiology of C3 glomerulopathy, with evidence for alternative pathway dysregulation obtained from affected individuals and complement factor H (Cfh)-deficient animal models. Recent linkage studies in familial C3 glomerulopathy have shown genomic rearrangements in the Cfh-related genes, for which the novel pathophysiologic concept of Cfh deregulation has been proposed. PMID:24161036

  11. Dense silica coatings on ceramic powder particles

    SciTech Connect

    Opitz, J.F.A.; Mayr, W.

    1995-09-01

    Dense silica coatings on the surface of inorganic powder particles are prepared by the hydrolysis of tetraethoxysilane (TEOS) in alcoholic suspensions. In a first reaction step, the TEOS is pre-hydrolysed in acidic solution; afterwards, a suspension of powder particles in this reaction solution is treated with ammonia, which results in a dense silica coating of typically 10 - 100 nm thickness. Different luminescent powders which are used in the manufacture of cathode-ray tubes or fluorescent lamps have been coated by this procedure. The silica coating forms a transparent layer, and the suspension properties of the coated powders are determined by the silica layer. The silica coating also protects sulfidic luminescent powders from being attacked by oxidizing agents such as dichromate ions, which are used in the suspension formulations for TV tube fabrication.

  12. The kinetic chemistry of dense interstellar clouds

    NASA Technical Reports Server (NTRS)

    Graedel, T. E.; Langer, W. D.; Frerking, M. A.

    1982-01-01

    A model of the time-dependent chemistry of dense interstellar clouds is formulated to study the dominant chemical processes in carbon and oxygen isotope fractionation, the formation of nitrogen-containing molecules, and the evolution of product molecules as a function of cloud density and temperature. The abundances of the dominant isotopes of the carbon- and oxygen-bearing molecules are calculated. The chemical abundances are found to be quite sensitive to the electron concentration, since the electron concentration determines the ratio of H3(+) to He(+), and the electron density is strongly influenced by the abundance of metals. For typical metal abundances and for H2 cloud densities of at least 10,000 molecules/cu cm, nearly all carbon exists as CO at late cloud ages. At high cloud density, many aspects of the chemistry are strongly time dependent. Finally, model calculations agree well with abundances deduced from observations of molecular line emission in cold dense clouds.

  13. Hydrodynamic stellar interactions in dense star clusters

    NASA Technical Reports Server (NTRS)

    Rasio, Frederic A.

    1993-01-01

    Highly detailed HST observations of globular-cluster cores and galactic nuclei motivate new theoretical studies of the violent dynamical processes which govern the evolution of these very dense stellar systems. These processes include close stellar encounters and direct physical collisions between stars. Such hydrodynamic stellar interactions are thought to explain the large populations of blue stragglers, millisecond pulsars, X-ray binaries, and other peculiar sources observed in globular clusters. Three-dimensional hydrodynamics techniques now make it possible to perform realistic numerical simulations of these interactions. The results, when combined with those of N-body simulations of stellar dynamics, should provide for the first time a realistic description of dense star clusters. Here I review briefly current theoretical work on hydrodynamic stellar interactions, emphasizing its relevance to recent observations.

  14. Impacts by Compact Ultra Dense Objects

    NASA Astrophysics Data System (ADS)

    Birrell, Jeremey; Labun, Lance; Rafelski, Johann

    2012-03-01

    We propose to search for compact ultra dense objects (CUDOs) of nuclear or greater density, which could constitute a significant fraction of the dark matter [1]. Considering their high density, the gravitational tidal forces are significant, and atomic-density matter cannot stop an impacting CUDO, which punctures the surface of the target body, pulverizing, heating and entraining material near its trajectory through the target [2]. Because impact features endure over geologic timescales, the Earth, Moon, Mars, Mercury and large asteroids are well-suited to act as time-integrating CUDO detectors. There are several potential candidates for CUDO structure, such as strangelet fragments or, more generally, dark matter if mechanisms exist for it to form compact objects. [1] B. J. Carr, K. Kohri, Y. Sendouda, & J.'i. Yokoyama, Phys. Rev. D81, 104019 (2010). [2] L. Labun, J. Birrell, J. Rafelski, Solar System Signatures of Impacts by Compact Ultra Dense Objects, arXiv:1104.4572.

  15. Observations of Plasmons in Warm Dense Matter

    SciTech Connect

    Glenzer, S H; Landen, O L; Neumayer, P; Lee, R W; Widmann, K; Pollaine, S W; Wallace, R J; Gregori, G; Holl, A; Bornath, T; Thiele, R; Schwarz, V; Kraeft, W; Redmer, R

    2006-09-05

    We present the first collective x-ray scattering measurements of plasmons in solid-density plasmas. The forward scattering spectra of a laser-produced narrow-band x-ray line from isochorically heated beryllium show that the plasmon frequency is a sensitive measure of the electron density. Dynamic structure calculations that include collisions and detailed balance match the measured plasmon spectrum, indicating that this technique will enable new applications to determine the equation of state and compressibility of dense matter.
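
    The quoted sensitivity follows from the free-electron plasma frequency, omega_p = sqrt(n_e e^2 / (eps0 m_e)); a small check (added here, with an assumed beryllium electron density) shows the tens-of-eV plasmon energy scale involved:

```python
import numpy as np

E = 1.602176634e-19       # C
EPS0 = 8.8541878128e-12   # F/m
M_E = 9.1093837015e-31    # kg
HBAR = 1.054571817e-34    # J s

def plasmon_energy_eV(n_e):
    """Plasmon energy in eV for a free-electron density n_e in m^-3."""
    omega_p = np.sqrt(n_e * E**2 / (EPS0 * M_E))
    return HBAR * omega_p / E

# Solid beryllium with two free electrons per atom has n_e ~ 2.5e29 m^-3
# (an assumed round number), putting the plasmon near 19 eV.
print(plasmon_energy_eV(2.5e29))
```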

  16. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1998-01-01

    Preparation, structure, and properties of mixed metal oxide compositions and their uses are described. Mixed metal oxide compositions of the invention have stratified crystalline structure identifiable by means of powder X-ray diffraction patterns. In the form of dense ceramic membranes, the present compositions demonstrate an ability to separate oxygen selectively from a gaseous mixture containing oxygen and one or more other volatile components by means of ionic conductivities.

  17. Particle sorting in dense granular flows

    NASA Astrophysics Data System (ADS)

    Hill, K. M.; Fan, Y.; Yohannes, B.

    2008-12-01

    Mixtures of particles tend to unmix by particle property. One of the most dramatically destructive examples of this occurs in debris flow: boulders, rocks, and mud tumble down a hillside, and the largest rocks migrate toward the top and then the front of the flow where they do the most damage. Rotating drums and chute flows are two of the most common apparatuses used to systematically study segregation in dense, gravity driven granular flows. In these cases, smaller or, alternatively, denser particles segregate away from the free surface, phenomena that have been modeled using mechanisms such as kinetic sieving and buoyancy, respectively. Other segregation mechanisms have been identified in suspensions and in more energetic systems such as a gradient in granular temperature -- the kinetic energy of velocity fluctuations -- and curvature effects. However, with most experimental systems the dominant segregation mechanism is difficult to ascertain. In typical experimental systems designed to study segregation in dense granular flow (such as chutes and rotated drums), gravity, velocity gradients and porosity gradients coexist in the direction of segregation. We study the segregation of mixtures of particles numerically and experimentally in a split-bottom cell and in a rotating drum to isolate three possible driving mechanisms for segregation of densely-sheared granular mixtures: gravity, porosity, and velocity gradients and their associated dynamics. We find gravity alone does not drive segregation associated with particle size without a sufficiently large porosity or porosity gradient. A velocity gradient, however, appears capable of driving segregation associated both with particle size and material density in dense flows. We present our results and discuss the implications for some particle segregation behaviors observed in natural systems such as debris flows and sediment transport.

  18. Structures for dense, crack free thin films

    DOEpatents

    Jacobson, Craig P.; Visco, Steven J.; De Jonghe, Lutgard C.

    2011-03-08

    The process described herein provides a simple and cost effective method for making crack free, high density thin ceramic film. The steps involve depositing a layer of a ceramic material on a porous or dense substrate. The deposited layer is compacted and then the resultant laminate is sintered to achieve a higher density than would have been possible without the pre-firing compaction step.

  19. Dense Molecular Gas in Centaurus A

    NASA Astrophysics Data System (ADS)

    Wild, Wolfgang; Eckart, Andreas

    1999-10-01

    Centaurus A (NGC 5128) is the closest radio galaxy, and its molecular interstellar medium has been studied extensively in recent years. However, these studies used mostly molecular lines tracing low to medium density gas (see e.g. Eckart et al. 1990; Wild et al. 1997). The amount and distribution of the dense component remained largely unknown. We present spectra of the HCN(1-0) emission - which traces dense (n(H2) > 10^4 cm-3) molecular gas - at the center and along the prominent dust lane at offset positions +/- 60" and +/- 100", as well as single CS(2-1) and CS(3-2) spectra, observed with the SEST on La Silla, Chile. At the central position, the integrated intensity ratio I(HCN)/I(CO) peaks at 0.064, and decreases to roughly 0.02 to 0.04 in the dust lane. Based on the line luminosity ratio L(HCN)/L(CO) we estimate that there is a significant amount of dense gas in Centaurus A. The fraction of dense molecular gas as well as the star formation efficiency L_FIR/L_CO towards the center of Cen A is comparable to ultra-luminous infrared galaxies, and falls in between the values for ULIRGs and normal galaxies for positions in the dust lane. Details will be published in Wild & Eckart (A&A, in prep.). References: Eckart et al. 1990, ApJ 363, 451; Rydbeck et al. 1993, Astr.Ap. (Letters) 270, L13; Wild, W., Eckart, A. & Wiklind, T. 1997, Astr.Ap. 322, 419.

  20. Shear dispersion in dense granular flows

    SciTech Connect

    Christov, Ivan C.; Stone, Howard A.

    2014-04-18

    We formulate and solve a model problem of dispersion of dense granular materials in rapid shear flow down an incline. The effective dispersivity of the depth-averaged concentration of the dispersing powder is shown to vary as the Péclet number squared, as in classical Taylor–Aris dispersion of molecular solutes. An extension to generic shear profiles is presented, and possible applications to industrial and geological granular flows are noted.
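
    For reference, the classical Taylor-Aris result that the abstract draws its analogy from (illustrative numbers only; the coefficient 48 is the circular-tube value and is geometry dependent, so treat it as an assumption):

```python
def taylor_aris_dispersivity(D, U, a, coeff=48.0):
    """Effective dispersivity from molecular diffusivity D (m^2/s),
    mean speed U (m/s), and transverse length scale a (m)."""
    Pe = U * a / D   # Peclet number; dispersion grows as Pe squared
    return D * (1.0 + Pe**2 / coeff)

# A solute with D = 1e-9 m^2/s in a 100-micron channel at 1 mm/s: Pe = 100
print(taylor_aris_dispersivity(D=1e-9, U=1e-3, a=1e-4))   # ~2.1e-7 m^2/s
```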

  1. Hybrid-Based Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are extracted by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter P_e is imposed on the energy function of SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting values derived from both the SGM cost aggregation and U-SURF matching, providing more reliable estimates at disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.
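
    As context, a minimal single-path sketch of the standard SGM cost aggregation whose penalties P1 and P2 the paper tunes (the shape-adaptive penalty estimation and the extra edge penalty P_e are not reproduced here):

```python
import numpy as np

# C[x, d] is the per-pixel matching cost for disparity d along one
# scanline; P1 penalizes one-level disparity changes and P2 larger jumps.
def aggregate_scanline(C, P1=10.0, P2=120.0):
    W, D = C.shape
    L = np.zeros((W, D), dtype=np.float64)
    L[0] = C[0]
    for x in range(1, W):
        prev = L[x - 1]
        best_prev = prev.min()
        plus = np.roll(prev, -1)       # cost at disparity d+1
        plus[-1] = np.inf
        minus = np.roll(prev, 1)       # cost at disparity d-1
        minus[0] = np.inf
        L[x] = C[x] + np.minimum.reduce(
            [prev, plus + P1, minus + P1,
             np.full(D, best_prev + P2)]) - best_prev
    return L

# Winner-takes-all after aggregation: disparities = L.argmin(axis=1)
```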

  2. Dense spray evaporation as a mixing process

    NASA Astrophysics Data System (ADS)

    de Rivas, A.; Villermaux, E.

    2016-05-01

    We explore the processes by which a dense set of small liquid droplets (a spray) evaporates in a dry, stirred gas phase. A dense spray of micron-sized liquid (water or ethanol) droplets is formed in air by a pneumatic atomizer in a closed chamber. The spray is conveyed in ambient air as a plume whose extension depends on the relative humidity of the diluting medium. Standard shear instabilities develop at the plume edge, forming the stretched lamellar structures familiar from passive scalars. Unlike passive scalars, however, these lamellae vanish in a finite time, because individual droplets at their border evaporate in contact with the dry environment. Experiments demonstrate that the lifetime of an individual droplet embedded in a lamella is much larger than expected from the usual d^2 law describing the fate of a single drop evaporating in a quiescent environment. By analogy with the way mixing times are understood from the convection-diffusion equation for passive scalars, we show that the lifetime of a spray lamella stretched at a constant rate γ is t_v = (1/γ) ln((1 + ϕ)/ϕ), where ϕ is a parameter that incorporates the thermodynamic and diffusional properties of the vapor in the diluting phase. The case of time-dependent stretching rates is examined too. A dense spray behaves almost as a (nonconserved) passive scalar.
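
    The quoted lifetime can be evaluated directly; a small sketch (the values of γ and ϕ below are arbitrary examples, not measurements from the paper):

```python
import numpy as np

# Direct evaluation of the abstract's lamella lifetime,
# t_v = (1/gamma) * ln((1 + phi) / phi).
def lamella_lifetime(gamma, phi):
    return np.log((1.0 + phi) / phi) / gamma

for phi in (0.01, 0.1, 1.0):   # smaller phi -> longer life relative to 1/gamma
    print(phi, lamella_lifetime(gamma=100.0, phi=phi))
```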

  3. Confined magnetic monopoles in dense QCD

    SciTech Connect

    Gorsky, A.; Shifman, M.; Yung, A.

    2011-04-15

    Non-Abelian strings exist in the color-flavor locked phase of dense QCD. We show that kinks appearing in the world-sheet theory on these strings, in the form of kink-antikink bound pairs, are magnetic monopoles, descendants of the 't Hooft-Polyakov monopoles surviving in such a special form in dense QCD. Our consideration is heavily based on analogies and inspiration coming from certain supersymmetric non-Abelian theories. This is the first analytic demonstration that objects unambiguously identifiable as magnetic monopoles are native to non-Abelian Yang-Mills theories (albeit our analysis extends only to the phase of monopole confinement and has nothing to say about their condensation). Technically, our demonstration becomes possible due to the fact that the low-energy dynamics of the non-Abelian strings in dense QCD is that of the orientational zero modes. It is described by an effective two-dimensional CP(2) model on the string world sheet. The kinks in this model representing confined magnetic monopoles are in a highly quantum regime.

  4. Multishock Compression Properties of Warm Dense Argon

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-10-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi′ = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked states under the conditions of different experiments: ηi′ increases with pressure in the lower density regime and, conversely, decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime.

  5. Multishock Compression Properties of Warm Dense Argon.

    PubMed

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi′ = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked states under the conditions of different experiments: ηi′ increases with pressure in the lower density regime and, conversely, decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505

  6. Comparative sequence analysis of a gene-dense region among closely related species of Drosophila melanogaster.

    PubMed

    Kawahara, Yoshihiro; Matsuo, Takashi; Nozawa, Masafumi; Shin-I, Tadasu; Kohara, Yuji; Aigaki, Toshiro

    2004-12-01

    Comparative sequence analysis among closely related species is essential for investigating the evolution of non-coding sequences, which evolve more rapidly than protein-coding sequences. We sequenced the cytogenetic map region 56F10-16, a gene-dense region, in D. simulans and D. sechellia, two species closely related to D. melanogaster. About 57 kb of genomic sequence containing 19 genes was annotated from each species according to the corresponding region of the D. melanogaster genome. The order and orientation of genes were perfectly conserved among the three species, and no transposable elements were found. The rate of nucleotide substitution in the non-coding sequences was lower than that at fourfold-degenerate sites, implying functional constraints in the non-coding regions. The sequence information from three closely related species allowed us to estimate the insertions and deletions that may have occurred in the lineages of D. simulans and D. sechellia, using the D. melanogaster sequence as an outgroup. The number of deletions was twice that of insertions for the introns of D. simulans. More remarkably, deletions outnumbered insertions by 7.5 times for the intergenic sequences of D. sechellia. These results suggest that the non-coding sequences have been shortened by deletion biases. However, the deletion bias was lower than that previously estimated for pseudogenes, suggesting that the non-coding sequences are already rich in functional elements, possibly involved in the regulation of gene expression including transcription and pre-mRNA processing. These features of non-coding sequences may be common to other gene-dense regions, contributing to the compactness of the Drosophila genome.

  7. Revisiting the Physico-Chemical Hypothesis of Code Origin: An Analysis Based on Code-Sequence Coevolution in a Finite Population

    NASA Astrophysics Data System (ADS)

    Bandhu, Ashutosh Vishwa; Aggarwal, Neha; Sengupta, Supratim

    2013-12-01

    The origin of the genetic code marked a major transition from a plausible RNA world to the world of DNA and proteins and is an important milestone in our understanding of the origin of life. We examine the efficacy of the physico-chemical hypothesis of code origin by carrying out simulations of code-sequence coevolution in finite populations in stages, leading first to the emergence of ten amino acid code(s) and subsequently to 14 amino acid code(s). We explore two different scenarios of primordial code evolution. In one scenario, competition occurs between populations of equilibrated code-sequence sets, while in the other, new codes compete with existing codes as they are gradually introduced into the population with a finite probability. In either case, we find that natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. The code whose structure is most consistent with the standard genetic code is often not among the codes that have a high fixation probability. However, we find that the composition of the code population affects the code fixation probability. A physico-chemically optimized code gets fixed with a significantly higher probability if it competes against a set of randomly generated codes. Our results suggest that physico-chemical optimization may not be the sole driving force in ensuring the emergence of the standard genetic code.

  8. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789

  9. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  10. On Coding Non-Contiguous Letter Combinations

    PubMed Central

    Dandurand, Frédéric; Grainger, Jonathan; Duñabeitia, Jon Andoni; Granier, Jean-Pierre

    2011-01-01

    Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity. PMID:21734901
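
    As a toy illustration of why non-contiguous combinations can be informative (an invented six-word lexicon, not the paper's corpus analysis), one can count how many ordered letter pairs uniquely identify a word when gaps are or are not allowed:

```python
from collections import defaultdict
from itertools import combinations

WORDS = ["trail", "trial", "train", "brain", "braid", "grain"]

def pair_index(words, contiguous_only):
    index = defaultdict(set)
    for w in words:
        for i, j in combinations(range(len(w)), 2):
            if contiguous_only and j != i + 1:
                continue
            index[(w[i], w[j])].add(w)   # ordered, possibly gapped pair
    return index

for name, contig in (("contiguous", True), ("non-contiguous", False)):
    idx = pair_index(WORDS, contig)
    unique = sum(1 for ws in idx.values() if len(ws) == 1)
    print(f"{name}: {len(idx)} distinct pairs, {unique} uniquely identifying")
```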

  11. To Code or Not To Code?

    ERIC Educational Resources Information Center

    Parkinson, Brian; Sandhu, Parveen; Lacorte, Manel; Gourlay, Lesley

    1998-01-01

    This article considers arguments for and against the use of coding systems in classroom-based language research and touches on some relevant considerations from ethnographic and conversational analysis approaches. The four authors each explain and elaborate on their practical decision to code or not to code events or utterances at a specific point…

  12. Rapid Optimization Library

    2014-05-13

    ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL only uses evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
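
    A conceptual sketch of such a matrix-free, gradient-optional driver (written in Python for illustration; this is not ROL's actual C++ interface):

```python
import numpy as np

# The optimizer only calls value() and, when the client provides it,
# gradient(); otherwise it falls back to slower finite-difference gradients.
def minimize(value, x0, gradient=None, lr=0.1, iters=200, h=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        if gradient is not None:
            g = np.asarray(gradient(x))          # client-supplied gradient
        else:
            g = np.array([(value(x + h * e) - value(x - h * e)) / (2 * h)
                          for e in np.eye(x.size)])   # finite differences
        x = x - lr * g                           # plain gradient descent
    return x

# A client "simulation" exposing a scalar response function:
print(minimize(lambda x: ((x - 3.0) ** 2).sum(), x0=[0.0, 0.0],
               gradient=lambda x: 2.0 * (x - 3.0)))   # -> [3., 3.]
```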

  13. Bare Code Reader

    NASA Astrophysics Data System (ADS)

    Clair, Jean J.

    1980-05-01

    The bar code system will be used in every market and supermarket. The code, which is standardized in the US and Europe (EAN code), gives information on price, storage and the nature of goods, and allows real-time management of the shop.

  14. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose to deploy a depth-first search segmentation algorithm traversing a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.
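
    The segmentation step can be illustrated with a small sketch (edge-sharing triangles are treated as adjacent; the framework's actual graph representation is an assumption here):

```python
from collections import defaultdict

# Depth-first search over the mesh graph labels each connected component
# of the polygonal surface.
def connected_components(triangles):
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (a, c)):
            edge_to_tris[tuple(sorted(e))].append(t)

    labels, n_components = {}, 0
    for seed in range(len(triangles)):
        if seed in labels:
            continue
        stack = [seed]                       # iterative DFS from this seed
        while stack:
            t = stack.pop()
            if t in labels:
                continue
            labels[t] = n_components
            a, b, c = triangles[t]
            for e in ((a, b), (b, c), (a, c)):
                stack.extend(edge_to_tris[tuple(sorted(e))])
        n_components += 1
    return labels, n_components

tris = [(0, 1, 2), (1, 2, 3), (4, 5, 6)]     # two separate surface patches
print(connected_components(tris))            # ({0: 0, 1: 0, 2: 1}, 2)
```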

  15. Quantum molecular dynamics simulations of transport properties in liquid and dense-plasma plutonium

    SciTech Connect

    Kress, J. D.; Cohen, James S.; Kilcrease, D. P.; Horner, D. A.; Collins, L. A.

    2011-02-15

    We have calculated the viscosity and self-diffusion coefficients of plutonium in the liquid phase using quantum molecular dynamics (QMD) and in the dense-plasma phase using orbital-free molecular dynamics (OFMD), as well as in the intermediate warm dense matter regime with both methods. Our liquid metal results for viscosity are about 40% lower than measured experimentally, whereas a previous calculation using an empirical interatomic potential (modified embedded-atom method) obtained results 3-4 times larger than the experiment. The QMD and OFMD results agree well at the intermediate temperatures. The calculations in the dense-plasma regime for temperatures from 50 to 5000 eV and densities about 1-5 times ambient are compared with the one-component plasma (OCP) model, using effective charges given by the average-atom code INFERNO. The INFERNO-OCP model results agree with the OFMD to within about a factor of 2, except for the viscosity at temperatures less than about 100 eV, where the disagreement is greater. A Stokes-Einstein relationship of the viscosities and diffusion coefficients is found to hold fairly well separately in both the liquid and dense-plasma regimes.

  16. Quantum molecular dynamics simulations of transport properties in liquid and dense-plasma plutonium.

    PubMed

    Kress, J D; Cohen, James S; Kilcrease, D P; Horner, D A; Collins, L A

    2011-02-01

    We have calculated the viscosity and self-diffusion coefficients of plutonium in the liquid phase using quantum molecular dynamics (QMD) and in the dense-plasma phase using orbital-free molecular dynamics (OFMD), as well as in the intermediate warm dense matter regime with both methods. Our liquid metal results for viscosity are about 40% lower than measured experimentally, whereas a previous calculation using an empirical interatomic potential (modified embedded-atom method) obtained results 3-4 times larger than the experiment. The QMD and OFMD results agree well at the intermediate temperatures. The calculations in the dense-plasma regime for temperatures from 50 to 5000 eV and densities about 1-5 times ambient are compared with the one-component plasma (OCP) model, using effective charges given by the average-atom code INFERNO. The INFERNO-OCP model results agree with the OFMD to within about a factor of 2, except for the viscosity at temperatures less than about 100 eV, where the disagreement is greater. A Stokes-Einstein relationship of the viscosities and diffusion coefficients is found to hold fairly well separately in both the liquid and dense-plasma regimes.

  17. Multishock Compression Properties of Warm Dense Argon

    PubMed Central

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi′ = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked states under the conditions of different experiments: ηi′ increases with pressure in the lower density regime and, conversely, decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505

  18. Dense Subgraph Partition of Positive Hypergraphs.

    PubMed

    Liu, Hairong; Latecki, Longin Jan; Yan, Shuicheng

    2015-03-01

    In this paper, we present a novel partition framework, called dense subgraph partition (DSP), to automatically, precisely and efficiently decompose a positive hypergraph into dense subgraphs. A positive hypergraph is a graph or hypergraph whose edges, except self-loops, have positive weights. We first define the concepts of core subgraph, conditional core subgraph, and disjoint partition of a conditional core subgraph, and then define DSP based on them. The result of DSP is an ordered list of dense subgraphs with decreasing densities, which uncovers all underlying clusters, as well as outliers. A divide-and-conquer algorithm, called min-partition evolution, is proposed to efficiently compute the partition. DSP has many appealing properties. First, it is a nonparametric partition that reveals all meaningful clusters in a bottom-up way. Second, it has an exact and efficient solution, the min-partition evolution algorithm, which is a divide-and-conquer algorithm, thus time-efficient and memory-friendly, and suitable for parallel processing. Third, it is a unified partition framework for a broad range of graphs and hypergraphs. We also establish its relationship with the densest k-subgraph problem (DkS), an NP-hard but fundamental problem in graph theory, and prove that DSP gives precise solutions to DkS for all k in a graph-dependent set, called the critical k-set. To the best of our knowledge, this is a strong result which has not been reported before. Moreover, as our experimental results show, for sparse graphs, especially web graphs, the size of the critical k-set is close to the number of vertices in the graph. We test the proposed partition framework on various tasks, and the experimental results clearly illustrate its advantages.
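
    DSP itself is not reproduced here, but the underlying notion of subgraph density can be made concrete with a standard greedy peeling baseline for ordinary graphs (Charikar's 2-approximation, not the paper's min-partition evolution algorithm):

```python
# Repeatedly remove a minimum-degree vertex, tracking the subgraph with
# the best density (edges per vertex) seen along the way.
def densest_subgraph(edges, n):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    m = len(edges)
    best, best_density = set(alive), m / len(alive)
    while alive:
        u = min(alive, key=lambda x: len(adj[x]))   # min-degree vertex
        m -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        adj[u].clear()
        alive.discard(u)
        if alive and m / len(alive) > best_density:
            best, best_density = set(alive), m / len(alive)
    return best, best_density

# K4 on {0,1,2,3} with a pendant path: the K4 is the densest subgraph.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(densest_subgraph(edges, 6))   # ({0, 1, 2, 3}, 1.5)
```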

  19. Temperature relaxation in dense plasma mixtures

    NASA Astrophysics Data System (ADS)

    Faussurier, Gérald; Blancard, Christophe

    2016-09-01

    We present a model to calculate temperature-relaxation rates in dense plasma mixtures. The electron-ion relaxation rates are calculated using an average-atom model and the ion-ion relaxation rates by the Landau-Spitzer approach. This method allows the study of the temperature relaxation in many-temperature electron-ion and ion-ion systems such as those encountered in inertial confinement fusion simulations. It is of interest for general nonequilibrium thermodynamics dealing with energy flows between various systems and should find broad use in present high energy density experiments.

  20. Resolving ultrafast heating of dense cryogenic hydrogen.

    PubMed

    Zastrau, U; Sperling, P; Harmand, M; Becker, A; Bornath, T; Bredow, R; Dziarzhytski, S; Fennel, T; Fletcher, L B; Förster, E; Göde, S; Gregori, G; Hilbert, V; Hochhaus, D; Holst, B; Laarmann, T; Lee, H J; Ma, T; Mithen, J P; Mitzner, R; Murphy, C D; Nakatsutsumi, M; Neumayer, P; Przystawik, A; Roling, S; Schulz, M; Siemer, B; Skruszewicz, S; Tiggesbäumker, J; Toleikis, S; Tschentscher, T; White, T; Wöstmann, M; Zacharias, H; Döppner, T; Glenzer, S H; Redmer, R

    2014-03-14

    We report on the dynamics of ultrafast heating in cryogenic hydrogen initiated by a ≲300  fs, 92 eV free electron laser x-ray burst. The rise of the x-ray scattering amplitude from a second x-ray pulse probes the transition from dense cryogenic molecular hydrogen to a nearly uncorrelated plasmalike structure, indicating an electron-ion equilibration time of ∼0.9  ps. The rise time agrees with radiation hydrodynamics simulations based on a conductivity model for partially ionized plasma that is validated by two-temperature density-functional theory.

  1. Electrical and thermal conductivities in dense plasmas

    SciTech Connect

    Faussurier, G. Blancard, C.; Combis, P.; Videau, L.

    2014-09-15

    Expressions for the electrical and thermal conductivities in dense plasmas are derived by combining the Chester-Thellung-Kubo-Greenwood approach and the Kramers approximation. The infrared divergence is removed by assuming a Drude-like behaviour. An analytical expression is obtained for the Lorenz number that interpolates between the cold solid-state and hot plasma phases. An expression for the electrical resistivity is proposed using the Ziman-Evans formula, from which the thermal conductivity can be deduced using the analytical expression for the Lorenz number. The present method can be used to estimate the electrical and thermal conductivities of mixtures. Comparisons with experiments and quantum molecular dynamics simulations are presented.
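
    For a feel of the numbers, the Wiedemann-Franz step alluded to above, using the degenerate-limit Sommerfeld Lorenz number (the paper's interpolated Lorenz number would replace it; the aluminium figures are rough assumptions for illustration):

```python
# Thermal conductivity follows from the electrical one as
# kappa = L * sigma * T, with the Sommerfeld Lorenz number below being
# the cold, degenerate (solid-like) limit.
L_SOMMERFELD = 2.44e-8   # W Ohm / K^2, i.e. (pi^2 / 3) * (k_B / e)^2

def thermal_conductivity(sigma, T, lorenz=L_SOMMERFELD):
    """kappa in W/(m K) from sigma in S/m and temperature T in K."""
    return lorenz * sigma * T

# Liquid aluminium near 1000 K with sigma ~ 4e6 S/m (assumed round values):
print(thermal_conductivity(4e6, 1000.0))   # ~98 W/(m K)
```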

  2. Phase boundary of hot dense fluid hydrogen

    PubMed Central

    Ohta, Kenji; Ichimaru, Kota; Einaga, Mari; Kawaguchi, Sho; Shimizu, Katsuya; Matsuoka, Takahiro; Hirao, Naohisa; Ohishi, Yasuo

    2015-01-01

    We investigated the phase transformation of hot dense fluid hydrogen using static high-pressure laser-heating experiments in a laser-heated diamond anvil cell. The results show anomalies in the heating efficiency that are likely attributable to the phase transition from a diatomic to a monoatomic fluid hydrogen (the plasma phase transition) in the pressure range between 82 and 106 GPa. This study imposes tighter constraints on the location of the hydrogen plasma phase transition boundary and suggests a higher critical point than that predicted by theoretical calculations. PMID:26548442

  3. Electrical Resistivity Measurements of Hot Dense Aluminum

    NASA Astrophysics Data System (ADS)

    Benage, J. F.; Shanahan, W. R.; Murillo, M. S.

    1999-10-01

    Electrical transport properties of dense aluminum are measured in the disordered liquidlike phase using a well-tamped, thermally equilibrated, exploding wire z pinch. Direct measurements of the electrical conductivity have been made using voltage and current measurements. Our measurements span the minimum conductivity regime, at higher densities than have been produced previously. We find that some Ziman-like theoretical predictions are in fair agreement with the data and one Ziman-like theoretical approach is in good agreement, in contrast to other experiments performed in similar regimes which indicate poor agreement with such theories.

  4. Dense optical-electrical interface module

    SciTech Connect

    Paul Chang

    2000-12-21

    The DOIM (Dense Optical-electrical Interface Module) is a custom-designed optical data transmission module employed in the upgrade of the Silicon Vertex Detector of the CDF experiment at Fermilab. Each DOIM module consists of a transmitter (TX) converting electrical differential input signals to optical outputs, a middle segment of jacketed fiber ribbon cable, and a receiver (RX) which senses the light inputs and converts them back to electrical signals. The targeted operational frequency is 53 MHz, and higher rates are achievable. This article outlines the design goals, implementation methods, production test results, and radiation hardness tests of these modules.

  5. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps required to generate optimized C code from Simulink models. It also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  6. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  7. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, based on a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high speed decoder implementation.
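
    A minimal sketch of the repeat-accumulate building blocks named above: repetition, an interleaver, and an accumulator as a running XOR (the ARA precoding and puncturing from the paper are omitted, and q and the block size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def repeat(bits, q=3):
    return np.repeat(bits, q)

def accumulate(bits):
    # the 1/(1+D) accumulator: y_k = x_1 xor x_2 xor ... xor x_k
    return np.cumsum(bits) % 2

info = rng.integers(0, 2, size=8)
repeated = repeat(info, q=3)
perm = rng.permutation(repeated.size)   # fixed interleaver, known to decoder
codeword = accumulate(repeated[perm])
print(info)
print(codeword)
```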

  8. Ion Acoustic Modes in Warm Dense Matter

    NASA Astrophysics Data System (ADS)

    Hartley, Nicholas; Monaco, Guilio; White, Thomas; Gregori, Gianluca; Graham, Peter; Fletcher, Luke; Appel, Karen; Tschentscher, Thomas; Lee, Hae Ja; Nagler, Bob; Galtier, Eric; Granados, Eduardo; Heimann, Philip; Zastrau, Ulf; Doeppner, Tilo; Gericke, Dirk; Lepape, Sebastien; Ma, Tammy; Pak, Art; Schropp, Andreas; Glenzer, Siegfried; Hastings, Jerry

    2015-06-01

    We present results that, for the first time, show scattering from ion acoustic modes in warm dense matter, representing an unprecedented level of energy resolution in the study of dense plasmas. The experiment was carried out at the LCLS facility in California on an aluminum sample at 7 g/cc and 5 eV. Using an X-ray probe at 8 keV, shifted peaks at +/-150 meV were observed. Although the energy shifts from interactions with the acoustic waves agree with predicted values from DFT-MD models, a central (elastic) peak was also observed, which did not appear in modelled spectra and may be due to the finite timescale of the simulation. Data fitting with a hydrodynamic form has proved able to match the observed spectrum and provide measurements of some thermodynamic properties of the system, which mostly agree with predicted values. Suggestions for further experiments to determine the cause of the disparity are also given.

  9. Nuclear quantum dynamics in dense hydrogen

    PubMed Central

    Kang, Dongdong; Sun, Huayang; Dai, Jiayu; Chen, Wenbo; Zhao, Zengxiu; Hou, Yong; Zeng, Jiaolong; Yuan, Jianmin

    2014-01-01

    Nuclear dynamics in dense hydrogen, which is determined by the key physics of large-angle scattering or many-body collisions between particles, is crucial for the dynamics of planetary evolution and hydrodynamical processes in inertial confinement fusion. Here, using improved ab initio path-integral molecular dynamics simulations, we investigated the nuclear quantum dynamics governing transport behaviors of dense hydrogen up to temperatures of 1 eV. With the inclusion of nuclear quantum effects (NQEs), the ionic diffusions are larger than the classical treatment by 20% to 146% as the temperature is decreased from 1 eV to 0.3 eV at 10 g/cm3; meanwhile, the electrical and thermal conductivities are significantly lowered. In particular, the ionic diffusion is found to be much larger than that without NQEs even when both ionic distributions are the same at 1 eV. The significant quantum delocalization of ions introduces a remarkably different scattering cross section between protons compared with classical particle treatments, which explains the large difference in transport properties induced by NQEs. The Stokes-Einstein relation, Wiedemann-Franz law, and isotope effects are re-examined, showing different behaviors in nuclear quantum dynamics. PMID:24968754

  10. Dynamics of Kr in dense clathrate hydrates.

    SciTech Connect

    Klug, D. D.; Tse, J. S.; Zhao, J. Y.; Sturhahn, W.; Alp, E. E.; Tulk, C. A.

    2011-01-01

    The dynamics of Kr atoms as guests in dense clathrate hydrate structures are investigated using site-specific {sup 83}Kr nuclear resonant inelastic x-ray scattering (NRIXS) spectroscopy in combination with molecular dynamics simulations. The dense structure H hydrate and filled-ice structures are studied at high pressures in a diamond anvil high-pressure cell. The dynamics of Kr in the structure H clathrate hydrate quench-recovered at 77 K is also investigated. The Kr phonon density of states obtained from the experimental NRIXS data is compared with molecular dynamics simulations. The temperature and pressure dependence of the phonon spectra provide details of the Kr dynamics in the clathrate hydrate cages. Comparison is made with the dynamics of Kr atoms in the low-pressure structure II obtained previously. The Lamb-Mossbauer factors obtained from NRIXS experiments and molecular dynamics calculations are in excellent agreement and are shown to yield unique information on the strength and temperature dependence of guest-host interactions.

  11. Probing the Physical Structures of Dense Filaments

    NASA Astrophysics Data System (ADS)

    Li, Di

    2015-08-01

    Filaments are a common feature of cosmological structures on various scales, ranging from the dark matter cosmic web, galaxy clusters, and inter-galactic gas flows to Galactic ISM clouds. Even within cold dense molecular cores, filaments have been detected. Theories and simulations with (or without) different combinations of physical principles, including gravity, thermal balance, turbulence, and magnetic fields, can reproduce intriguing images of filaments. The ubiquity of filaments and the similarity of simulated ones make physical parameters, beyond dust column density, a necessity for understanding filament evolution. I report three projects attempting to measure physical parameters of filaments. We derive the volume density of a dense Taurus filament based on several cyanoacetylene transitions observed by GBT and ART. We measure the gas temperature of the OMC 2-3 filament based on combined GBT+VLA ammonia images. We also measured the sub-millimeter polarization vectors along OMC3. These filaments were found to be likely cylinder-type structures, without dynamic heating, and likely accreting mass along the magnetic field lines.

  12. Solids flow rate measurement in dense slurries

    SciTech Connect

    Porges, K.G.; Doss, E.D.

    1993-09-01

    Accurate and rapid flow rate measurement of solids in dense slurries remains an unsolved technical problem, with important industrial applications in chemical processing plants and long-distance solids conveyance. In a hostile two-phase medium, such a measurement calls for two independent parameter determinations, both by non-intrusive means. Typically, dense slurries tend to flow in laminar, non-Newtonian mode, eliminating most conventional means that usually rely on calibration (which becomes more difficult and costly for high pressure and temperature media). These issues are reviewed, and specific solutions are recommended in this report. Detailed calculations that lead to improved measuring device designs are presented for both bulk density and average velocity measurements. Cross-correlation, chosen here for the latter task, has long been too inaccurate for practical applications. The cause and the cure of this deficiency are discussed using theory-supported modeling. Fluid mechanics is used to develop the velocity profiles of laminar non-Newtonian flow in a rectangular duct. This geometry uniquely allows the design of highly accurate 'capacitive' devices and also lends itself to gamma transmission densitometry on an absolute basis. An absolute readout, though of less accuracy, is also available from a capacitive densitometer, and a pair of capacitive sensors yields signals suitable for cross-correlation velocity measurement.
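
    The cross-correlation velocity measurement discussed above reduces to finding the transit-time delay between two axially separated sensor signals. A minimal sketch, assuming two sampled capacitive signals and a known sensor spacing (all names hypothetical):

```python
import numpy as np

def transit_velocity(up, down, dt, sensor_spacing):
    """Estimate slurry velocity from the transit-time delay between upstream
    and downstream sensor signals via cross-correlation (sketch)."""
    up = up - up.mean()
    down = down - down.mean()
    xc = np.correlate(down, up, mode="full")
    lag = xc.argmax() - (len(up) - 1)     # samples by which 'down' trails 'up'
    return sensor_spacing / (lag * dt)
```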

  13. Quantum molecular dynamics simulations of dense matter

    SciTech Connect

    Collins, L.; Kress, J.; Troullier, N.; Lenosky, T.; Kwon, I.

    1997-12-31

    The authors have developed a quantum molecular dynamics (QMD) simulation method for investigating the properties of dense matter in a variety of environments. The technique treats a periodically-replicated reference cell containing N atoms in which the nuclei move according to the classical equations of motion. The interatomic forces are generated from the quantum mechanical interactions between the electrons and nuclei. To generate these forces, the authors employ several methods of varying sophistication, from tight-binding (TB) to elaborate density functional (DF) schemes. In the latter case, lengthy simulations on the order of 200 atoms are routinely performed, while for TB, which requires no self-consistency, upwards of 1000 atoms are systematically treated. The QMD method has been applied to a variety of cases: (1) fluid/plasma hydrogen from liquid density to 20 times volume-compressed for temperatures of a thousand to a million degrees Kelvin; (2) isotopic hydrogenic mixtures; (3) liquid metals (Li, Na, K); (4) impurities such as argon in dense hydrogen plasmas; and (5) metal/insulator transitions in rare gas systems (Ar, Kr) under high compressions. The advent of parallel versions of the methods, especially for fast eigensolvers, presages LDA simulations in the range of 500-1000 atoms and TB runs for tens of thousands of particles. This leap should allow treatment of shock chemistry as well as large-scale mixtures of species in highly transient environments.

  14. ALEGRA-HEDP simulations of the dense plasma focus.

    SciTech Connect

    Flicker, Dawn G.; Kueny, Christopher S.; Rose, David V.

    2009-09-01

    We have carried out 2D simulations of three dense plasma focus (DPF) devices using the ALEGRA-HEDP code and validated the results against experiments. The three devices included two Mather-type machines described by Bernard et al. and the Tallboy device currently in operation at NSTec in North Las Vegas. We present simulation results and compare to detailed plasma measurements for one Bernard device and to current and neutron yields for all three. We also describe a new ALEGRA capability to import data from particle-in-cell calculations of initial gas breakdown, which will allow the first-ever simulations of DPF operation from the beginning of the voltage discharge to the pinch phase for arbitrary operating conditions and without assumptions about the early sheath structure. The next step in understanding DPF pinch physics must be three-dimensional modeling of conditions going into the pinch, and we have just launched our first 3D simulation of the best-diagnosed Bernard device.

  15. Massive Star Formation: Characterising Infall and Outflow in dense cores.

    NASA Astrophysics Data System (ADS)

    Akhter, Shaila; Cunningham, Maria; Harvey-Smith, Lisa; Jones, Paul Andrew; Purcell, Cormac; Walsh, Andrew John

    2015-08-01

    Massive stars are some of the most important objects in the Universe, shaping the evolution of galaxies, creating chemical elements, and hence shaping the evolution of the Universe. However, the processes by which they form, and how they shape their environment during their birth, are not well understood. We are using NH3 data from "The H2O Southern Galactic Plane Survey" (HOPS) to define the positions of dense cores/clumps of gas in the southern Galactic plane that are likely to form stars. Due to its effective critical density, NH3 traces massive star-forming regions more effectively than other tracers. We compared different clump-finding methods and found FellWalker to perform best. We found that ~10% of the star-forming clumps have multiple velocity components along the line of sight, while ~90% have a single component. Then, using data from "The Millimetre Astronomy Legacy Team 90 GHz" (MALT90) survey, we search for the presence of infall and outflow associated with these cores. We will subsequently use the "3D Molecular Line Radiative Transfer Code" (MOLLIE) to constrain properties of the infall and outflow, such as velocity and mass flow. The aim of the project is to determine how common infall and outflow are in star-forming cores, hence providing valuable constraints on the timescales and physical processes involved in massive star formation.

  16. Efficient Online Aggregates in Dense-Region-Based Data Cube Representations

    NASA Astrophysics Data System (ADS)

    Haddadin, Kais; Lauer, Tobias

    In-memory OLAP systems require a space-efficient representation of sparse data cubes in order to accommodate large data sets. On the other hand, most efficient online aggregation techniques, such as prefix sums, are built on dense array-based representations. These are often not applicable to real-world data due to the size of the arrays which usually cannot be compressed well, as most sparsity is removed during pre-processing. A possible solution is to identify dense regions in a sparse cube and only represent those using arrays, while storing sparse data separately, e.g. in a spatial index structure. Previous dense-region-based approaches have concentrated mainly on the effectiveness of the dense-region detection (i.e. on the space-efficiency of the result). However, especially in higher-dimensional cubes, data is usually more cluttered, resulting in a potentially large number of small dense regions, which negatively affects query performance on such a structure. In this paper, our focus is not only on space-efficiency but also on time-efficiency, both for the initial dense-region extraction and for queries carried out in the resulting hybrid data structure. We describe two methods to trade available memory for increased aggregate query performance. In addition, optimizations in our approach significantly reduce the time to build the initial data structure compared to former systems. Also, we present a straightforward adaptation of our approach to support multi-core or multi-processor architectures, which can further enhance query performance. Experiments with different real-world data sets show how various parameter settings can be used to adjust the efficiency and effectiveness of our algorithms.
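
    For the dense regions, the prefix-sum (summed-area) technique mentioned above answers range aggregates in O(1) after linear preprocessing. A minimal 2D sketch; a cube store would generalize this to d dimensions:

```python
import numpy as np

def build_prefix(a):
    """Summed-area table: P[i, j] = sum of a[:i, :j]."""
    return np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def range_sum(P, r0, r1, c0, c1):
    """Aggregate over the half-open box a[r0:r1, c0:c1] in O(1)
    by inclusion-exclusion on the four corners."""
    return P[r1, c1] - P[r0, c1] - P[r1, c0] + P[r0, c0]

cube = np.arange(12).reshape(3, 4)
P = build_prefix(cube)
assert range_sum(P, 1, 3, 0, 2) == cube[1:3, 0:2].sum()
```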

  17. Non-extensive trends in the size distribution of coding and non-coding DNA sequences in the human genome

    NASA Astrophysics Data System (ADS)

    Oikonomou, Th.; Provata, A.

    2006-03-01

    We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19, which is the most dense in coding), using non-extensive statistics. We show that the exponents governing the spatial decay of the coding size distributions vary between 5.2 ≤ r ≤ 5.7 for the short scales and 1.45 ≤ q ≤ 1.50 for the large scales. On the contrary, the exponents governing the spatial decay of the non-coding size distributions in these four chromosomes take the values 2.4 ≤ r ≤ 3.2 for the short scales and 1.50 ≤ q ≤ 1.72 for the large scales. These results, in particular the values of the tail exponent q, indicate the existence of correlations in the coding and non-coding size distributions with a tendency for higher correlations in the non-coding DNA.
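
    The tail exponents q reported here refer to Tsallis q-exponential distributions, which interpolate between an ordinary exponential (q → 1) and a power law with tail exponent 1/(q − 1). A minimal sketch of the functional form used in such fits:

```python
import numpy as np

def q_exponential(x, q, lam=1.0):
    """Tsallis q-exponential e_q(-x/lam); reduces to exp(-x/lam) as q -> 1
    and decays as the power law x**(-1/(q-1)) for q > 1 at large x."""
    if np.isclose(q, 1.0):
        return np.exp(-x / lam)
    base = 1.0 + (q - 1.0) * x / lam
    return np.where(base > 0, base ** (-1.0 / (q - 1.0)), 0.0)

x = np.logspace(0, 4, 5)
print(q_exponential(x, q=1.6))   # heavy tail compared to exp(-x)
```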

  18. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  19. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  20. Parafermion stabilizer codes

    NASA Astrophysics Data System (ADS)

    Güngördü, Utkan; Nepal, Rabindra; Kovalev, Alexey A.

    2014-10-01

    We define and study parafermion stabilizer codes, which can be viewed as generalizations of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions. Parafermion stabilizer codes can protect against low-weight errors acting on a small subset of parafermion modes in analogy to qudit stabilizer codes. Examples of several smallest parafermion stabilizer codes are given. A locality-preserving embedding of qudit operators into parafermion operators is established that allows one to map known qudit stabilizer codes to parafermion codes. We also present a local 2D parafermion construction that combines topological protection of Kitaev's toric code with additional protection relying on parity conservation.

  1. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.

  2. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  3. Dense gas shock tube: Design and analysis

    NASA Astrophysics Data System (ADS)

    Fergason, Stephen Harrison

    BZT fluids in the single-phase vapor region are largely unexamined experimentally. To date, only one experimental study has focused on nonclassical behavior in the single-phase vapor region. A new experimental program is proposed to examine the possibility of generating nonclassical behaviors in a shock tube apparatus. A design methodology is developed to identify the most important experimental characteristics and provide appropriate analytical and computational tools for subsequent study. Analysis suggests initial conditions, viscous effects, and wave interference as critical experimental characteristics. A shock tube design is proposed based on the results of the methodology. An algorithm is developed and applied to classical state equations to generate experimentally feasible initial conditions which maximize the possibility of detecting a single-phase rarefaction shock wave within experimental accuracy and precision. The algorithm was applied to a commercially available fluid thought to exhibit dense gas behavior. It was found that the range of possible initial conditions generating dense gas phenomena is larger than previously assumed. The shock tube is computationally modeled to validate the triple-discontinuity initial conditions and investigate the appropriate design dimensions. A two-step, flux-limited, total variation diminishing scheme was implemented to integrate the unsteady Navier-Stokes equations using three independent gas models. The triple-discontinuity flow field was verified with simulations. A novel shock tube was constructed based on the previous analysis. A sixteen-foot stainless steel pipe with a single diaphragm was placed within a series of electric ovens. The test section thermal environment was controlled utilizing sixteen independent PID control loops. Initial conditions similar in pressure and temperature to dense gas conditions were generated for nitrogen gas. The nitrogen test results were compared with classical one

  4. Implementation and Refinement of a Comprehensive Model for Dense Granular Flows

    SciTech Connect

    Sundaresan, Sankaran

    2015-09-30

    Dense granular flows are ubiquitous in both natural and industrial processes. They manifest three different flow regimes, each exhibiting its own dependence on solids volume fraction, shear rate, and particle-level properties. This research project sought to develop continuum rheological models for dense granular flows that bridge multiple regimes of flow, implement them in open-source platforms for gas-particle flows and perform test simulations. The first phase of the research covered in this project involved implementation of a steady-shear rheological model that bridges quasi-static, intermediate and inertial regimes of flow into MFIX (Multiphase Flow with Interphase eXchanges - a general purpose computer code developed at the National Energy Technology Laboratory). MFIX simulations of dense granular flows in an hourglass-shaped hopper were then performed as test examples. The second phase focused on formulation of a modified kinetic theory for frictional particles that can be used over a wider range of particle volume fractions and also applies to dynamic, multi-dimensional flow conditions. To guide this work, simulations of simple shear flows of identical mono-disperse spheres were also performed using the discrete element method. The third phase of this project sought to develop and implement a more rigorous treatment of boundary effects. Towards this end, simulations of simple shear flows of identical mono-disperse spheres confined between parallel plates were performed and analyzed to formulate compact wall boundary conditions that can be used for dense frictional flows at flat frictional boundaries. The fourth phase explored the role of modest levels of cohesive interactions between particles on the dense-phase rheology. The final phase of this project focused on implementation and testing of the modified kinetic theory in MFIX and running bin-discharge simulations as test examples.

  5. DENSE: efficient and prior knowledge-driven discovery of phenotype-associated protein functional modules

    PubMed Central

    2011-01-01

    proteins are likely associated with the target phenotype. The DENSE code can be downloaded from http://www.freescience.org/cs/DENSE/ PMID:22024446

  6. Optimization of the lead probe neutron detector.

    SciTech Connect

    Ziegler, Lee; Ruiz, Carlos L.; Franklin, James Kenneth; Cooper, Gary Wayne; Nelson, Alan J.

    2004-03-01

    The lead probe neutron detector was originally designed by Spencer and Jacobs in 1965. The detector is based on lead activation due to the following neutron scattering reactions: {sup 207}Pb(n, n'){sup 207m}Pb and {sup 208}Pb(n, 2n){sup 207m}Pb. Delayed gammas from the metastable state of {sup 207m}Pb are counted using a plastic scintillator. The half-life of {sup 207m}Pb is 0.8 seconds. In the work reported here, MCNP was used to optimize the efficiency of the lead probe by suitably modifying the original geometry. A prototype detector was then built and tested. A 'layer cake' design was investigated in which thin (< 5 mm) layers of lead were sandwiched between thicker ({approx} 1 - 2 cm) layers of scintillator. An optimized 'layer cake' design had Figures of Merit (derived from the code) which were a factor of 3 greater than the original lead probe for DD neutrons, and a factor of 4 greater for DT neutrons, while containing 30% less lead. A smaller scale, 'proof of principle' prototype was built by Bechtel/Nevada to verify the code results. Its response to DD neutrons was measured using the DD dense plasma focus at Texas A&M and it conformed to the predicted performance. A voltage and discriminator sweep was performed to determine optimum sensitivity settings. It was determined that a calibration operating point could be obtained using a {sup 133}Ba 'bolt' as is the case with the original lead probe.

  7. Constitutive relations for steady, dense granular flows

    NASA Astrophysics Data System (ADS)

    Vescovi, D.; Berzi, D.; di Prisco, C. G.

    2011-12-01

    In the recent past, the flow of dense granular materials has been the subject of many scientific works; this is due to the large number of natural phenomena involving solid particles flowing at high concentration (e.g., debris flows and landslides). In contrast with the flow of dilute granular media, where the energy is essentially dissipated in binary collisions, the flow of dense granular materials is characterized by multiple, long-lasting and frictional contacts among the particles. The work focuses on the mechanical response of dry granular materials under steady, simple shear conditions. In particular, the goal is to obtain a complete rheology able to describe the material behavior within the entire range of concentrations for which the flow can be considered dense. The total stress is assumed to be the linear sum of a frictional and a kinetic component. The frictional and the kinetic contribution are modeled in the context of the critical state theory [8, 10] and the kinetic theory of dense granular gases [1, 3, 7], respectively. In the critical state theory, the granular material approaches a certain attractor state, independent of the initial arrangement, characterized by the capability of developing unlimited shear strains without any change in the concentration. Given that a disordered granular packing exists only for a range of concentration between the random loose and close packing [11], a form for the concentration dependence of the frictional normal stress that makes the latter vanish at the random loose packing is defined. In the kinetic theory, the particles are assumed to interact through instantaneous, binary and uncorrelated collisions. A new state variable of the problem is introduced, the granular temperature, which accounts for the velocity fluctuations. The model has been extended to account for the decrease in the energy dissipation due to the existence of correlated motion among the particles [5, 6] and to deal with non

  8. Plasmon resonance in warm dense matter

    SciTech Connect

    Thiele, R; Bornath, T; Fortmann, C; Holl, A; Redmer, R; Reinholz, H; Ropke, G; Wierling, A; Glenzer, S H; Gregori, G

    2008-02-21

    Collective Thomson scattering with extreme ultraviolet light or x-rays is shown to allow for a robust measurement of the free electron density in dense plasmas. Collective excitations like plasmons appear as maxima in the scattering signal. Their frequency position can be directly related to the free electron density. The range of applicability of the standard Gross-Bohm dispersion relation and of an improved dispersion relation in comparison to calculations based on the dielectric function in random phase approximation is investigated. More importantly, this well-established treatment of Thomson scattering on free electrons is generalized in the Born-Mermin approximation by including collisions. We show that, in the transition region from collective to non-collective scattering, the consideration of collisions is important.
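
    The density diagnostic works by inverting the Bohm-Gross dispersion relation, ω² = ω_pe² + 3k²v_th², for the plasma frequency and hence the free electron density n_e = ε₀ m_e ω_pe² / e². The numbers below (probe geometry, temperature, plasmon shift) are illustrative assumptions, not values from the paper:

```python
import numpy as np

eps0, me, e, hbar = 8.854e-12, 9.109e-31, 1.602e-19, 1.055e-34

Te = 12.0 * e      # electron temperature in J (assumed)
k  = 4.0e9         # scattering wavenumber in 1/m (assumed geometry)
dE = 18.0 * e      # measured plasmon energy shift in J (assumed)

omega = dE / hbar
omega_pe_sq = omega**2 - 3.0 * k**2 * (Te / me)   # Bohm-Gross dispersion
n_e = eps0 * me * omega_pe_sq / e**2              # free electron density
print(f"n_e = {n_e:.1e} m^-3")                    # ~2e29 m^-3, i.e. ~2e23 cm^-3
```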

  9. GRAPE-6 Simulations of Dense Star Clusters

    NASA Astrophysics Data System (ADS)

    Slavin, Shawn D.; Maxwell, J. E.; Cohn, H. N.; Lugger, P. M.

    2007-12-01

    We report on recent results from a long-term program of N-body simulations of dense star cluster evolution which is being done with GRAPE-6 systems at Indiana University and Purdue University Calumet. We have been simulating cases of star cluster evolution with a particular focus on the dynamical evolution of hard binary populations of varying size. Initial models with a range of mass spectra, both with and without primordial binary populations, are being investigated to points well beyond core collapse. Our goal is to better understand the evolution of compact binary populations in collapsed-core globular clusters. Observations of collapsed-core clusters with HST and Chandra have revealed populations of hard, X-ray binaries well outside the cluster core. Our work is focused on understanding the diffusion of these dynamically hardened binaries to regions in the cluster halo and the robustness of this process in models with mass spectra versus single-mass models.

  10. Kaon condensation in dense stellar matter

    SciTech Connect

    Lee, Chang-Hwan; Rho, M.

    1995-03-01

    This article combines two talks given by the authors and is based on work done in collaboration with G.E. Brown and D.P. Min on kaon condensation in dense baryonic medium treated in chiral perturbation theory using the heavy-baryon formalism. It contains, in addition to what was recently published, astrophysical backgrounds for kaon condensation discussed by Brown and Bethe, a discussion of a renormalization-group analysis of meson condensation worked out together with H.K. Lee and S.J. Sin, and the recent results of K.M. Westerberg in the bound-state approach to the Skyrme model. Negatively charged kaons are predicted to condense at a critical density 2 {approx_lt} {rho}/{rho}o {approx_lt} 4, in the range that allows the intriguing new phenomena predicted by Brown and Bethe to take place in compact star matter.

  11. Carbon nitride frameworks and dense crystalline polymorphs

    NASA Astrophysics Data System (ADS)

    Pickard, Chris J.; Salamat, Ashkan; Bojdys, Michael J.; Needs, Richard J.; McMillan, Paul F.

    2016-09-01

    We used ab initio random structure searching (AIRSS) to investigate polymorphism in C3N4 carbon nitride as a function of pressure. Our calculations reveal new framework structures, including a particularly stable chiral polymorph of space group P43212 containing mixed sp2 and sp3 bonding, that we have produced experimentally and recovered to ambient conditions. As pressure is increased, a sequence of structures with fully sp3-bonded C atoms and threefold-coordinated N atoms is predicted, culminating in a dense Pnma phase above 250 GPa. Beyond 650 GPa we find that C3N4 becomes unstable to decomposition into diamond and pyrite-structured CN2.

  12. Prediction of viscosity of dense fluid mixtures

    NASA Astrophysics Data System (ADS)

    Royal, Damian D.; Vesovic, Velisa; Trusler, J. P. Martin; Wakeham, William. A.

    The Vesovic-Wakeham (VW) method of predicting the viscosity of dense fluid mixtures has been improved by implementing new mixing rules based on the rigid sphere formalism. The proposed mixing rules are based on both Lebowitz's solution of the Percus-Yevick equation and on the Carnahan-Starling equation. The predictions of the modified VW method have been compared with experimental viscosity data for a number of diverse fluid mixtures: natural gas, hexane + heptane, hexane + octane, cyclopentane + toluene, and a ternary mixture of hydrofluorocarbons (R32 + R125 + R134a). The results indicate that the proposed improvements make possible the extension of the original VW method to liquid mixtures and to mixtures containing polar species, while retaining its original accuracy.
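
    One ingredient of the mixing rules mentioned above is the Carnahan-Starling equation for the hard-sphere fluid, whose compressibility factor is Z = (1 + η + η² − η³)/(1 − η)³ in terms of the packing fraction η. A one-liner for reference:

```python
def carnahan_starling_Z(eta):
    """Hard-sphere compressibility factor Z = PV/(NkT) as a function of
    packing fraction eta; accurate across the dense-fluid range."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

for eta in (0.1, 0.3, 0.45):
    print(f"eta = {eta:.2f}  Z = {carnahan_starling_Z(eta):.2f}")
```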

  13. Granular flow model for dense planetary rings

    SciTech Connect

    Borderies, N.; Goldreich, P.; Tremaine, S.

    1985-09-01

    In the present study of the viscosity of a differentially rotating particle disk, in the limiting case where the particles are densely packed and their collective behavior resembles that of a liquid, the pressure tensor is derived from both the equations of hydrodynamics and a simple kinetic model of collisions due to Haff (1983). Density waves and narrow circular rings are unstable if the liquid approximation applies, and the consequent nonlinear perturbations may generate splashing of the ring material in the vertical direction. These results are pertinent to the origin of the ellipticities of ringlets, the nonaxisymmetric features near the outer edge of the Saturn B ring, and unexplained residuals in kinematic models of the Saturn and Uranus rings. 24 references.

  14. Nonlinear extraordinary wave in dense plasma

    SciTech Connect

    Krasovitskiy, V. B.; Turikov, V. A.

    2013-10-15

    Conditions for the propagation of a slow extraordinary wave in dense magnetized plasma are found. A solution to the set of relativistic hydrodynamic equations and Maxwell’s equations under the plasma resonance conditions, when the phase velocity of the nonlinear wave is equal to the speed of light, is obtained. The deviation of the wave frequency from the resonance frequency is accompanied by nonlinear longitudinal-transverse oscillations. It is shown that, in this case, the solution to the set of self-consistent equations obtained by averaging the initial equations over the period of high-frequency oscillations has the form of an envelope soliton. The possibility of excitation of a nonlinear wave in plasma by an external electromagnetic pulse is confirmed by numerical simulations.

  15. Properties of industrial dense gas plumes

    NASA Astrophysics Data System (ADS)

    Shaver, E. M.; Forney, L. J.

    Hazardous gases and vapors are often discharged into the atmosphere from industrial plants during catastrophic events (e.g. the Union Carbide incident in Bhopal, India). In many cases the discharged components are more dense than air and settle to the ground surface downstream from the stack exit. In the present paper, the buoyant plume model of Hoult, Fay and Forney (1969, J. Air Pollut. Control Ass. 19, 585-590) has been altered to predict the properties of hazardous discharges. In particular, the plume impingement point, radius and concentration are predicted for typical stack exit conditions, wind speeds and temperature profiles. Asymptotic expressions for plume properties at the impingement point are also derived for a constant crosswind and neutral temperature profile. These formulae are shown to be useful for all conditions.

  16. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1996-01-01

    Preparation, structure, and properties of mixed metal oxide compositions containing at least strontium, cobalt, iron and oxygen are described. The crystalline mixed metal oxide compositions of this invention have, for example, a structure represented by Sr_α(Fe_{1-x}Co_x)_{α+β}O_δ, where x is a number in a range from 0.01 to about 1, α is a number in a range from about 1 to about 4, β is a number in a range upward from 0 to about 20, and δ is a number which renders the compound charge neutral, and wherein the composition has a non-perovskite structure. Use of the mixed metal oxides in dense ceramic membranes which exhibit oxygen ionic conductivity and selective oxygen separation is described, as well as their use in separation of oxygen from an oxygen-containing gaseous mixture.

  17. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1997-01-01

    Preparation, structure, and properties of mixed metal oxide compositions containing at least strontium, cobalt, iron and oxygen are described. The crystalline mixed metal oxide compositions of this invention have, for example, a structure represented by Sr_α(Fe_{1-x}Co_x)_{α+β}O_δ, where x is a number in a range from 0.01 to about 1, α is a number in a range from about 1 to about 4, β is a number in a range upward from 0 to about 20, and δ is a number which renders the compound charge neutral, and wherein the composition has a non-perovskite structure. Use of the mixed metal oxides in dense ceramic membranes which exhibit oxygen ionic conductivity and selective oxygen separation is described, as well as their use in separation of oxygen from an oxygen-containing gaseous mixture.

  18. Complexation-induced phase separation: preparation of composite membranes with a nanometer-thin dense skin loaded with metal ions.

    PubMed

    Villalobos, Luis Francisco; Karunakaran, Madhavan; Peinemann, Klaus-Viktor

    2015-05-13

    We present the development of a facile phase-inversion method for forming asymmetric membranes with a precise high metal ion loading capacity in only the dense layer. The approach combines the use of macromolecule-metal intermolecular complexes to form the dense layer of asymmetric membranes with nonsolvent-induced phase separation to form the porous support. This allows the independent optimization of both the dense layer and porous support while maintaining the simplicity of a phase-inversion process. Moreover, it facilitates control over (i) the thickness of the dense layer throughout several orders of magnitude from less than 15 nm to more than 6 μm, (ii) the type and amount of metal ions loaded in the dense layer, (iii) the morphology of the membrane surface, and (iv) the porosity and structure of the support. This simple and scalable process provides a new platform for building multifunctional membranes with a high loading of well-dispersed metal ions in the dense layer.

  19. Visualizing expanding warm dense matter heated by laser-generated ion beams

    SciTech Connect

    Bang, Woosuk

    2015-08-24

    This PowerPoint presentation concluded with the following. We calculated the expected heating per atom and temperatures of various target materials using a Monte Carlo simulation code and SESAME EOS tables. We used aluminum ion beams to heat gold and diamond uniformly and isochorically. A streak camera imaged the expansion of warm dense gold (5.5 eV) and diamond (1.7 eV). GXI-X recorded all 16 x-ray images of the unheated gold bar targets, proving that it could image the motion of the gold/diamond interface of the proposed target.

  20. The Effects of Stellar Dynamics on the Evolution of Young, Dense Stellar Systems

    NASA Astrophysics Data System (ADS)

    Belkus, H.; van Bever, J.; Vanbeveren, D.

    In this paper, we report on first results of a project in Brussels in which we study the effects of stellar dynamics on the evolution of young dense stellar systems, using three decades of expertise in massive-star evolution and our population (number and spectral) synthesis code. We highlight an unconventionally formed object scenario (UFO scenario) for Wolf-Rayet binaries and study the effects of a luminous blue variable-type instability wind mass-loss formalism on the formation of intermediate-mass black holes.

  1. A Comparative Study on Seismic Analysis of Bangladesh National Building Code (BNBC) with Other Building Codes

    NASA Astrophysics Data System (ADS)

    Bari, Md. S.; Das, T.

    2013-09-01

    The tectonic framework of Bangladesh and adjoining areas indicates that Bangladesh lies well within an active seismic zone. The after-effects of an earthquake are more severe in an underdeveloped and densely populated country like ours than in developed countries. The Bangladesh National Building Code (BNBC) was first established in 1993 to provide guidelines for the design and construction of new structures subject to earthquake ground motions, in order to minimize the risk to life for all structures. A revision of BNBC 1993 is underway to bring it up to date with other international building codes. This paper aims at the comparison of various provisions of seismic analysis as given in building codes of different countries. This comparison will give an idea of where our country stands when it comes to safety against earthquakes. Primarily, various seismic parameters in BNBC 2010 (draft) have been studied and compared with those of BNBC 1993. Later, both the 1993 and 2010 editions of BNBC have been compared graphically with building codes of other countries, such as the National Building Code of India 2005 (NBC-India 2005) and the American Society of Civil Engineers 7-05 (ASCE 7-05). The base shear/weight ratios have been plotted against the height of the building. The investigation in this paper reveals that BNBC 1993 has the least base shear among all the codes. Factored base shear values of BNBC 2010 are found to have increased significantly over those of BNBC 1993 for low-rise buildings (≤20 m). Despite the revision, BNBC 2010 (draft) still suggests lower base shear values than the Indian and American codes. Therefore, the increase in the factor of safety against earthquakes imposed by the proposed BNBC 2010 code, through its higher base shear values, is appreciable.
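
    For readers unfamiliar with the quantity being plotted, the base shear/weight ratio is the seismic coefficient Cs in V = Cs·W. The sketch below evaluates an ASCE 7-style Cs against building height; all parameter values are hypothetical placeholders, not the ones used in the paper:

```python
# Hypothetical parameters for illustration only -- actual values depend on
# site class, seismic zone, and the specific code edition being compared.
S_DS, S_D1 = 0.50, 0.20      # design spectral accelerations (assumed)
R, I       = 5.0, 1.0        # response modification and importance factors (assumed)
Ct, x      = 0.0466, 0.9     # approximate-period coefficients, concrete frame (metric)

for h in (10.0, 20.0, 40.0, 80.0):                  # building height in meters
    T  = Ct * h**x                                  # approximate fundamental period
    Cs = min(S_DS / (R / I), S_D1 / (T * (R / I)))  # ASCE 7-style base shear coefficient
    print(f"h = {h:5.1f} m  T = {T:.2f} s  V/W = {Cs:.3f}")
```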

  2. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  3. Development and evaluation of a dense gas plume model

    SciTech Connect

    Matthias, C.S.

    1994-12-31

    The dense gas plume model (continuous release) described in this paper has been developed using the same principles as for a dense gas puff model (instantaneous release). It is a box model for which the main goal is to predict the height H, width W, and maximum concentration C{sub b} for a steady dense plume. A secondary goal is to distribute the mass more realistically by empirically attaching Gaussian distributions in the horizontal and vertical directions. For ease of reference, the models and supporting programs will be referred to as DGM (Dense Gas Models).

  4. meta-DENSE complex acquisition for reduced intravoxel dephasing

    NASA Astrophysics Data System (ADS)

    Aletras, Anthony H.; Arai, Andrew E.

    2004-08-01

    Displacement encoding with stimulated echoes (DENSE) with a meta-DENSE readout and RF phase cycling to suppress the STEAM anti-echo is described for reducing intravoxel dephasing signal loss. This RF phase cycling scheme, when combined with existing meta-DENSE suppression of the T1 recovering signal, yields higher quality DENSE myocardial strain maps. Phantom and human images are provided to demonstrate the technique, which is capable of acquiring phase contrast displacement encoded images at low encoding gradient strengths providing better spatial resolution and less signal loss due to intravoxel dephasing than prior methods.

  5. Asymmetric effect on single-file dense pedestrian flow

    NASA Astrophysics Data System (ADS)

    Kuang, Hua; Cai, Mei-Jing; Li, Xing-Li; Song, Tao

    2015-11-01

    In this paper, an extended optimal velocity model is proposed to simulate single-file dense pedestrian flow by considering asymmetric interaction (i.e. attractive and repulsive forces), which depends on the distance between pedestrians. The stability condition of this model is obtained by using linear stability theory. The phase diagram comparison and analysis show that the asymmetric effect plays an important role in strengthening the stabilization of the system. The modified Korteweg-de Vries (mKdV) equation near the critical point is derived by applying the reductive perturbation method. The pedestrian jam can be described by the kink-antikink soliton solution of the mKdV equation. From the simulation of the space-time evolution of pedestrian spacing, it can be found that the asymmetric interaction is more efficient than the symmetric interaction in suppressing pedestrian jams. Furthermore, the simulation results are consistent with the theoretical analysis and also reproduce experimental phenomena better.
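
    For orientation, the classic optimal velocity dynamics that this model extends is dv_i/dt = a[V(Δx_i) − v_i] on a ring. The sketch below uses the standard OV function V(h) = tanh(h − 2) + tanh(2) and adds a crude asymmetric braking term; the paper's actual distance-dependent attractive/repulsive force law is not reproduced here:

```python
import numpy as np

def ov_step(x, v, dt, L, a=1.0, lam=0.2):
    """One Euler step of an optimal-velocity model on a ring of length L.
    The lam term only acts when the gap ahead is closing (asymmetric response)."""
    h = (np.roll(x, -1) - x) % L                 # headway to the pedestrian ahead
    dv = np.roll(v, -1) - v                      # rate of change of that headway
    acc = a * (np.tanh(h - 2.0) + np.tanh(2.0) - v) + lam * np.minimum(dv, 0.0)
    return (x + v * dt) % L, v + acc * dt

N, L = 50, 100.0
rng = np.random.default_rng(0)
x = np.linspace(0.0, L, N, endpoint=False) + 0.1 * rng.random(N)
v = np.full(N, 0.9)
for _ in range(5000):
    x, v = ov_step(x, v, dt=0.05, L=L)
print(v.std())   # small velocity spread indicates a stable, jam-free flow
```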

  6. Edge compression techniques for visualization of dense directed graphs.

    PubMed

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules' (groups of nodes) such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all have the same goal, to compress the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.
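
    The simplest of the three techniques, grouping nodes with identical neighbor sets, is easy to sketch. The version below matches out-neighbor sets only; a faithful directed-graph module would match in-neighbors as well:

```python
from collections import defaultdict

def group_identical_neighbors(adj):
    """Group nodes whose out-neighbor sets are identical; each group can be
    drawn as a single module with one shared set of outgoing edges."""
    groups = defaultdict(list)
    for node, nbrs in adj.items():
        groups[frozenset(nbrs)].append(node)
    return [g for g in groups.values() if len(g) > 1]

adj = {"a": {"x", "y"}, "b": {"x", "y"}, "c": {"x"}, "x": set(), "y": {"c"}}
print(group_identical_neighbors(adj))   # [['a', 'b']]
```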

  7. Maximum constrained sparse coding for image representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhao, Danpei; Jiang, Zhiguo

    2015-12-01

    Sparse coding exhibits good performance in many computer vision applications by finding bases which capture high-level semantics of the data and learning sparse coefficients in terms of those bases. However, because the bases are non-orthogonal, sparse coding can hardly preserve the samples' similarity, which is important for discrimination. In this paper, a new image representation method called maximum constrained sparse coding (MCSC) is proposed. A sparse representation with more active coefficients carries more similarity information, so an infinity-norm term is added to the solution for this purpose. We solve the optimization problem by constraining the maximum of the codes and releasing the residual to other dictionary atoms. Experimental results on image clustering show that our method can preserve the similarity of adjacent samples while maintaining the sparsity of the codes.
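
    The paper's MCSC solver is not spelled out in the abstract, but the ℓ1 baseline it builds on is standard. A minimal sketch of plain sparse coding via ISTA (iterative soft thresholding), onto which an ℓ∞ term would be added:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Plain l1 sparse coding via ISTA: min_c 0.5*||x - D c||^2 + lam*||c||_1.
    MCSC adds an l-infinity constraint on top of this kind of baseline."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ c - x)              # gradient of the quadratic term
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c
```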

  8. Neuronal codes for visual perception and memory.

    PubMed

    Quian Quiroga, Rodrigo

    2016-03-01

    In this review, I describe and contrast the representation of stimuli in visual cortical areas and in the medial temporal lobe (MTL). While cortex is characterized by a distributed and implicit coding that is optimal for recognition and storage of semantic information, the MTL shows a much sparser and explicit coding of specific concepts that is ideal for episodic memory. I will describe the main characteristics of the coding in the MTL by the so-called concept cells and will then propose a model of the formation and recall of episodic memory based on partially overlapping assemblies. PMID:26707718

  9. Coded continuous wave meteor radar

    NASA Astrophysics Data System (ADS)

    Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter

    2016-03-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, along with several practical ways to increase computation speed and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
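
    The pulse-compression gain described above can be demonstrated in a few lines: a pseudorandom BPSK code buried well below the per-sample noise floor is recovered by correlating against the known transmit sequence. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1000)        # pseudorandom BPSK phase code

delay, amp = 337, 0.1                             # hypothetical echo parameters
echo = amp * np.roll(code, delay) + 0.5 * rng.standard_normal(code.size)

# Pulse compression: correlate against the known transmit sequence. The echo
# sits ~14 dB below the per-sample noise power, yet the ~30 dB compression
# gain of the 1000-chip code makes the correct lag stand out clearly.
xc = np.array([np.dot(echo, np.roll(code, k)) for k in range(code.size)])
print(int(np.argmax(np.abs(xc))))                 # -> 337
```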

  10. An experimental study of dense aerosol aggregations

    NASA Astrophysics Data System (ADS)

    Dhaubhadel, Rajan

    We demonstrated that an aerosol can gel. This gelation was then used for a one-step method to produce an ultralow-density porous carbon or silica material. This material was named an aerosol gel because it was made via gelation of particles in the aerosol phase. The carbon and silica aerosol gels had high specific surface areas (200-350 m2/g for carbon and 300-500 m2/g for silica) and extremely low densities (2.5-6.0 mg/cm3), properties similar to conventional aerogels. Key aspects to form a gel from an aerosol are a large volume fraction, ca. 10^-4 or greater, and a small primary particle size, 50 nm or smaller, so that the gel time is fast compared to other characteristic times. Next we report the results of a study of the cluster morphology and kinetics of a dense aggregating aerosol system using the small angle light scattering technique. The soot particles started as individual monomers, ca. 38 nm in radius, grew into bigger clusters with time and finally stopped evolving after spanning a network across the whole system volume. This spanning is aerosol gelation. The gelled system showed a hybrid morphology with a lower fractal dimension at length scales of a micron or smaller and a higher fractal dimension at length scales greater than a micron. The study of the kinetics of the aggregating system showed that when the system gelled, the aggregation kernel homogeneity lambda attained a value of 0.4 or higher. The magnitude of the aggregation kernel showed an increase with increasing volume fraction. We also used an image analysis technique to study the cluster morphology. From the digitized pictures of soot clusters, the cluster morphology was determined by two different methods: structure factor and perimeter analysis. We find a hybrid, superaggregate morphology characterized by a fractal dimension of Df ≈ 1.8 between the monomer size, ca. 50 nm, and 1 μm, and Df ≈ 2.6 at larger length scales up to ~10 μm. The superaggregate morphology is a
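
    The fractal dimensions quoted above come from scaling fits of the form N ~ (R/a)^Df, i.e. a straight-line fit in log-log coordinates. A minimal sketch on synthetic data with Df = 1.8:

```python
import numpy as np

def fractal_dimension(radii, masses):
    """Estimate Df from the scaling N ~ (R/a)**Df via a log-log fit."""
    slope, _ = np.polyfit(np.log(radii), np.log(masses), 1)
    return slope

# synthetic cluster data obeying N ~ R**1.8 (DLCA-like regime)
R = np.logspace(-1.3, 0, 20)               # radii in microns
N = 50.0 * R**1.8 * (1 + 0.05 * np.random.default_rng(2).standard_normal(20))
print(f"Df ~ {fractal_dimension(R, N):.2f}")
```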

  11. Cellulases and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-02-20

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  12. Cellulases and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-01-01

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  13. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…

  14. Navigated DENSE strain imaging for post-radiofrequency ablation lesion assessment in the swine left atria

    PubMed Central

    Schmidt, Ehud J.; Fung, Maggie M.; Ciris, Pelin Aksit; Song, Ting; Shankaranarayanan, Ajit; Holmvang, Godtfred; Gupta, Sandeep N.; Chaput, Miguel; Levine, Robert A.; Ruskin, Jeremy; Reddy, Vivek Y.; D'avila, Andre; Aletras, Anthony H.; Danik, Stephan B.

    2014-01-01

    Aims Prior work has demonstrated that magnetic resonance imaging (MRI) strain can separate necrotic/stunned myocardium from healthy myocardium in the left ventricle (LV). We surmised that high-resolution MRI strain, using navigator-echo-triggered DENSE, could differentiate radiofrequency ablated tissue around the pulmonary vein (PV) from tissue that had not been damaged by radiofrequency energy, similarly to navigated 3D myocardial delayed enhancement (3D-MDE). Methods and results A respiratory-navigated 2D-DENSE sequence was developed, providing strain encoding in two spatial directions with 1.2 × 1.0 × 4 mm3 resolution. It was tested in the LV of infarcted sheep. In four swine, incomplete circumferential lesions were created around the right superior pulmonary vein (RSPV) using ablation catheters, recorded with electro-anatomic mapping, and imaged 1 h later using atrial-diastolic DENSE and 3D-MDE at the left atrium/RSPV junction. DENSE detected ablation gaps (regions with >12% strain) in similar positions to 3D-MDE (2D cross-correlation 0.89 ± 0.05). Low-strain (<8%) areas were, on average, 33% larger than equivalent MDE regions, so they include both injured and necrotic regions. Optimal DENSE orientation was perpendicular to the PV trunk, with high shear strain in adjacent viable tissue appearing as a sensitive marker of ablation lesions. Conclusions Magnetic resonance imaging strain may be a non-contrast alternative to 3D-MDE in intra-procedural monitoring of atrial ablation lesions. PMID:24014803

  15. Dense deposit disease is not a membranoproliferative glomerulonephritis.

    PubMed

    Walker, Patrick D; Ferrario, Franco; Joh, Kensuke; Bonsib, Stephen M

    2007-06-01

    Dense deposit disease (first reported in 1962) was classified as subtype II of membranoproliferative glomerulonephritis in the early 1970s. Over the last 30 years, marked differences in etiology and pathogenesis between type I membranoproliferative glomerulonephritis and dense deposit disease have become apparent. The sporadic observation that dense deposit disease can be seen with markedly different light microscopy appearances prompted this study. The goal was to examine a large number of renal biopsies from around the world to characterize the histopathologic features of dense deposit disease. Eighty-one cases of dense deposit disease were received from centers across North America, Europe and Japan. Biopsy reports, light microscopy materials and electron photomicrographs were reviewed and histopathologic features scored. Sixty-nine cases were acceptable for review. Five patterns were seen: (1) membranoproliferative n=17; (2) mesangial proliferative n=30; (3) crescentic n=12; (4) acute proliferative and exudative n=8 and (5) unclassified n=2. The age range was 3-67 years, with 74% in the range of 3-20 years; 15% 21-30 years and 11% over 30 years. Males accounted for 54% and females 46%. All patients with either crescentic dense deposit disease or acute proliferative dense deposit disease were between the ages of 3 and 18 years. The essential diagnostic feature of dense deposit disease is not the membranoproliferative pattern but the presence of electron dense transformation of the glomerular basement membranes. Based upon this study and the extensive data developed over the past 30 years, dense deposit disease is clinically distinct from membranoproliferative glomerulonephritis and is morphologically heterogeneous with only a minority of cases having a membranoproliferative pattern. Therefore, dense deposit disease should no longer be regarded as a subtype of membranoproliferative glomerulonephritis. PMID:17396142

  16. Microporous polyvinylidene fluoride film with dense surface enables efficient piezoelectric conversion

    NASA Astrophysics Data System (ADS)

    Chen, Dajing; Zhang, John X. J.

    2015-05-01

    We demonstrate that asymmetric porous polyvinylidene fluoride (PVDF) film, with pores mostly distributed in the bulk but not at the surfaces, can be used as a highly efficient piezoelectric energy generation device. For such microporous PVDF film with a dense, pore-free surface, piezoelectric theory shows that the energy conversion efficiency of a piezoelectric device depends upon the compressibility of the structure. Film mechanical properties can be controlled by dispersing micro-scale pores in a polymer matrix beneath a dense top layer. Piezoelectric output is enhanced by optimization of the PVDF micro-structure and the electromechanical coupling efficiency. The power output increased threefold with a designed three-dimensional asymmetric porous structure as compared to solid film.

  17. Evolutionary models of rotating dense stellar systems: challenges in software and hardware

    NASA Astrophysics Data System (ADS)

    Fiestas, Jose

    2016-02-01

    We present evolutionary models of rotating self-gravitating systems (e.g. globular clusters, galaxy cores). These models are characterized by the presence of initial axisymmetry due to rotation. Central black hole seeds are alternatively included in our models, and black hole growth due to consumption of stellar matter is simulated until the central potential dominates the kinematics in the core. The goal is to study the long-term evolution (~ Gyr) of relaxed dense stellar systems that deviate from spherical symmetry, their morphology and final kinematics. For this purpose, we developed a 2D Fokker-Planck analytical code, whose results we confirm with detailed N-body techniques using a high-performance code developed for GPU machines. We compare our models to available observations of galactic rotating globular clusters and conclude that initial rotation significantly modifies the shape and lifetime of these systems and cannot be neglected in studying the evolution of globular clusters, and of the galaxy itself.

  18. EMF wire code research

    SciTech Connect

    Jones, T.

    1993-11-01

    This paper examines the results of previous wire code research to determine the relationship between wire codes, electromagnetic fields, and childhood cancer. The paper suggests that, in the original Savitz study, biases toward producing a false positive association between high wire codes and childhood cancer were created by the selection procedure.

  19. Elemental nitrogen partitioning in dense interstellar clouds

    PubMed Central

    Daranlot, Julien; Hincelin, Ugo; Bergeat, Astrid; Costes, Michel; Loison, Jean-Christophe; Wakelam, Valentine; Hickson, Kevin M.

    2012-01-01

    Many chemical models of dense interstellar clouds predict that the majority of gas-phase elemental nitrogen should be present as N2, with an abundance approximately five orders of magnitude less than that of hydrogen. As a homonuclear diatomic molecule, N2 is difficult to detect spectroscopically through infrared or millimeter-wavelength transitions. Therefore, its abundance is often inferred indirectly through its reaction product N2H+. Two main formation mechanisms, each involving two radical-radical reactions, are the source of N2 in such environments. Here we report measurements of the low temperature rate constants for one of these processes, the N + CN reaction, down to 56 K. The measured rate constants for this reaction, and those recently determined for two other reactions implicated in N2 formation, are tested using a gas-grain model employing a critically evaluated chemical network. We show that the amount of interstellar nitrogen present as N2 depends on the competition between its gas-phase formation and the depletion of atomic nitrogen onto grains. As the reactions controlling N2 formation are inefficient, we argue that N2 does not represent the main reservoir species for interstellar nitrogen. Instead, elevated abundances of more labile forms of nitrogen such as NH3 should be present on interstellar ices, promoting the eventual formation of nitrogen-bearing organic molecules. PMID:22689957
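
    The competition described above, gas-phase N2 formation versus freeze-out of atomic N onto grains, can be caricatured with a two-channel rate equation. All rate coefficients and abundances below are placeholders, not the measured values from the paper:

```python
# Toy competition between gas-phase N2 formation (N + CN -> N2 + C) and
# freeze-out of atomic N onto grain surfaces. Numbers are placeholders.
k_ncn, n_cn = 1.0e-10, 1.0e-4    # cm^3 s^-1; cm^-3 (CN abundance held fixed)
k_freeze    = 1.0e-13            # s^-1, effective freeze-out rate (assumed)

n_N, n_N2, dt = 10.0, 0.0, 1.0e9          # cm^-3, cm^-3, s
for _ in range(10000):                     # ~3 x 10^5 yr of evolution
    form   = k_ncn * n_cn * n_N            # gas-phase N2 formation
    freeze = k_freeze * n_N                # N lost to grain surfaces
    n_N  -= (form + freeze) * dt
    n_N2 += form * dt
print(f"N left: {n_N:.2f}, N2 formed: {n_N2:.2f} (cm^-3)")
```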

  20. Thermochemistry of dense hydrous magnesium silicates

    NASA Technical Reports Server (NTRS)

    Bose, Kunal; Burnley, Pamela; Navrotsky, Alexandra

    1994-01-01

    Recent experimental investigations under mantle conditions have identified a suite of dense hydrous magnesium silicate (DHMS) phases that could be conduits to transport water to at least the 660 km discontinuity via mature, relatively cold, subducting slabs. Water released from successive dehydration of these phases during subduction could be responsible for deep focus earthquakes, mantle metasomatism and a host of other physico-chemical processes central to our understanding of the earth's deep interior. In order to construct a thermodynamic database that can delineate and predict the stability ranges for DHMS phases, reliable thermochemical and thermophysical data are required. One of the major obstacles in calorimetric studies of phases synthesized under high pressure conditions has been the limitation imposed by the small (less than 5 mg) sample mass. Our refined calorimetric techniques now allow precise determination of enthalpies of solution for samples of hydrous magnesium silicates smaller than 5 mg. For example, high-temperature solution calorimetry of natural talc (Mg(0.99)Fe(0.01)Si4O10(OH)2), periclase (MgO) and quartz (SiO2) yields enthalpies of drop solution at 1044 K of 592.2 (2.2), 52.01 (0.12) and 45.76 (0.4) kJ/mol, respectively. The corresponding enthalpy of formation from oxides at 298 K for talc is -5908.2 kJ/mol, agreeing within 0.1 percent with literature values.
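
    For readers unfamiliar with drop-solution calorimetry, the enthalpy of formation from the oxides is obtained by combining the measured drop-solution enthalpies in a Hess cycle. The schematic below is for a generic anhydrous phase; hydrous phases such as talc require an additional water term, omitted here for brevity:

      % Hess cycle for drop-solution calorimetry (schematic). For a phase
      % (MgO)_a(SiO2)_b assembled from its oxides, with dH_ds the measured
      % enthalpy of drop solution at the calorimeter temperature:
      \[
        \Delta H^{\circ}_{f,\mathrm{ox}} =
          a\,\Delta H_{ds}(\mathrm{MgO}) + b\,\Delta H_{ds}(\mathrm{SiO_2})
          - \Delta H_{ds}(\mathrm{phase})
      \]
      % Dissolving reactants and product into the same solvent makes the
      % final states cancel; hydrous phases add an H2O vaporization term.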

  1. Thermochemistry of dense hydrous magnesium silicates

    NASA Astrophysics Data System (ADS)

    Bose, Kunal; Burnley, Pamela; Navrotsky, Alexandra

    Recent experimental investigations under mantle conditions have identified a suite of dense hydrous magnesium silicate (DHMS) phases that could be conduits to transport water to at least the 660 km discontinuity via mature, relatively cold, subducting slabs. Water released from successive dehydration of these phases during subduction could be responsible for deep focus earthquakes, mantle metasomatism and a host of other physico-chemical processes central to our understanding of the earth's deep interior. In order to construct a thermodynamic database that can delineate and predict the stability ranges for DHMS phases, reliable thermochemical and thermophysical data are required. One of the major obstacles in calorimetric studies of phases synthesized under high pressure conditions has been the limitation imposed by the small (less than 5 mg) sample mass. Our refined calorimetric techniques now allow precise determination of enthalpies of solution for samples of hydrous magnesium silicates smaller than 5 mg. For example, high-temperature solution calorimetry of natural talc (Mg(0.99)Fe(0.01)Si4O10(OH)2), periclase (MgO) and quartz (SiO2) yields enthalpies of drop solution at 1044 K of 592.2 (2.2), 52.01 (0.12) and 45.76 (0.4) kJ/mol, respectively. The corresponding enthalpy of formation from oxides at 298 K for talc is -5908.2 kJ/mol, agreeing within 0.1 percent with literature values.

  2. The lifetime of evaporating dense sprays

    NASA Astrophysics Data System (ADS)

    de Rivas, Alois; Villermaux, Emmanuel

    2015-11-01

    We study the processes by which a set of nearby liquid droplets (a spray) evaporates in a gas phase whose relative humidity (vapor concentration) is controlled at will. A dense spray of micron-sized water droplets is formed in air by a pneumatic atomizer and conveyed through a nozzle into a closed chamber whose vapor concentration has been pre-set to a controlled value. The resulting plume extension depends on the relative humidity of the diluting medium. When the spray plume is straight and laminar, droplets evaporate at its edge, where the vapor is saturated and diffuses through a boundary layer developing around the plume. We quantify the shape and length of the plume as a function of the injection, vapor-diffusion, thermodynamic, and environmental parameters. For higher injection Reynolds numbers, standard shear instabilities distort the plume into stretched lamellae, thus enhancing the diffusion of vapor from their boundary towards the diluting medium. These lamellae vanish in a finite time that depends on the intensity of the stretching and the relative humidity of the environment, with a lifetime diverging close to the equilibrium limit, when the plume develops in a medium saturated in vapor. These dependences are described quantitatively.
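
    The diverging lifetime near saturation can be illustrated with the classical d-squared law of droplet evaporation, taking the driving force proportional to (1 - RH). This is a back-of-the-envelope sketch with a hypothetical evaporation constant, not the quantitative model of the paper:

      # d^2-law estimate of a droplet lifetime in humid air: the squared
      # diameter shrinks at rate K = K0 * (1 - RH), so the lifetime
      # t = d0^2 / K diverges as the environment approaches saturation
      # (RH -> 1), qualitatively matching the diverging lamella lifetime.
      def lifetime(d0, rh, k0=1e-9):  # d0 in m; k0 (m^2/s) hypothetical
          return d0**2 / (k0 * (1.0 - rh))

      for rh in (0.0, 0.5, 0.9, 0.99):
          print(f"RH={rh:4.2f}  t={lifetime(5e-6, rh):.2e} s")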

  3. Packing frustration in dense confined fluids.

    PubMed

    Nygård, Kim; Sarman, Sten; Kjellander, Roland

    2014-09-01

    Packing frustration for confined fluids, i.e., the incompatibility between the preferred packing of the fluid particles and the packing constraints imposed by the confining surfaces, is studied for a dense hard-sphere fluid confined between planar hard surfaces at short separations. The detailed mechanism for the frustration is investigated via an analysis of the anisotropic pair distributions of the confined fluid, as obtained from integral equation theory for inhomogeneous fluids at pair correlation level within the anisotropic Percus-Yevick approximation. By examining the mean forces that arise from interparticle collisions around the periphery of each particle in the slit, we calculate the principal components of the mean force for the density profile--each component being the sum of collisional forces on a particle's hemisphere facing either surface. The variations of these components with the slit width give rise to rather intricate changes in the layer structure between the surfaces, but, as shown in this paper, the basis of these variations can be easily understood qualitatively and often also semi-quantitatively. It is found that the ordering of the fluid is in essence governed locally by the packing constraints at each single solid-fluid interface. A simple superposition of forces due to the presence of each surface gives surprisingly good estimates of the density profiles, but there remain nontrivial confinement effects that cannot be explained by superposition, most notably the magnitude of the excess adsorption of particles in the slit relative to bulk.
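
    The superposition estimate mentioned at the end has a compact form: if each wall alone contributes a potential of mean force w(z), adding the two wall contributions multiplies the normalized single-wall density profiles. A hedged sketch, where n_wall is a purely illustrative single-wall profile rather than one computed from integral equation theory:

      import numpy as np

      # Superposition estimate for a slit of width H: with
      # n(z) = n_b * exp(-beta * w(z)) next to one wall, the combined
      # potential w(z) + w(H - z) gives n(z) * n(H - z) / n_b.
      def slit_profile(n_wall, n_bulk, z, H):
          return n_wall(z) * n_wall(H - z) / n_bulk

      # Toy single-wall profile with contact enhancement plus decaying
      # layering oscillations (illustrative only).
      n_b = 0.8
      n_wall = lambda z: n_b * (1 + 0.5 * np.exp(-z) * np.cos(2 * np.pi * z))
      z = np.linspace(0.0, 3.0, 7)
      print(slit_profile(n_wall, n_b, z, H=3.0))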

  4. Elemental nitrogen partitioning in dense interstellar clouds.

    PubMed

    Daranlot, Julien; Hincelin, Ugo; Bergeat, Astrid; Costes, Michel; Loison, Jean-Christophe; Wakelam, Valentine; Hickson, Kevin M

    2012-06-26

    Many chemical models of dense interstellar clouds predict that the majority of gas-phase elemental nitrogen should be present as N(2), with an abundance approximately five orders of magnitude less than that of hydrogen. As a homonuclear diatomic molecule, N(2) is difficult to detect spectroscopically through infrared or millimeter-wavelength transitions. Therefore, its abundance is often inferred indirectly through its reaction product N(2)H(+). Two main formation mechanisms, each involving two radical-radical reactions, are the source of N(2) in such environments. Here we report measurements of the low temperature rate constants for one of these processes, the N + CN reaction, down to 56 K. The measured rate constants for this reaction, and those recently determined for two other reactions implicated in N(2) formation, are tested using a gas-grain model employing a critically evaluated chemical network. We show that the amount of interstellar nitrogen present as N(2) depends on the competition between its gas-phase formation and the depletion of atomic nitrogen onto grains. As the reactions controlling N(2) formation are inefficient, we argue that N(2) does not represent the main reservoir species for interstellar nitrogen. Instead, elevated abundances of more labile forms of nitrogen such as NH(3) should be present on interstellar ices, promoting the eventual formation of nitrogen-bearing organic molecules.

  5. Oblique impact of dense granular sheets

    NASA Astrophysics Data System (ADS)

    Ellowitz, Jake; Guttenberg, Nicholas; Jaeger, Heinrich M.; Nagel, Sidney R.; Zhang, Wendy W.

    2013-11-01

    Motivated by experiments showing that impacts of granular jets with non-circular cross sections produce thin ejecta sheets with anisotropic shapes, we study what happens when two sheets containing densely packed, rigid grains traveling at the same speed collide asymmetrically. Discrete-particle simulations and a continuum frictional fluid model yield the same steady-state solution: two exit streams emerging from the incident streams. When the incident angle Δθ exceeds Δθc = 120° ± 10°, the exit streams' angles differ from those measured in water-sheet experiments. Below Δθc, the exit angles from granular and water-sheet impacts agree. This correspondence is surprising because 2D Euler jet impact, the idealization relevant for both situations, is ill posed: a generic Δθ value permits a continuous family of solutions. Our finding that granular and water-sheet impacts evolve into the same member of the solution family suggests that previous proposals, in which perturbations such as viscous drag, surface tension or air entrapment select the actual outcome, are not correct. (Author currently at Department of Physics, University of Oregon, Eugene, OR 97403.)

  6. Mach reflection in a warm dense plasma

    SciTech Connect

    Foster, J. M.; Rosen, P. A.; Wilde, B. H.; Hartigan, P.; Perry, T. S.

    2010-11-15

    The phenomenon of irregular shock-wave reflection is of importance in high-temperature gas dynamics, astrophysics, inertial-confinement fusion, and related fields of high-energy-density science. However, most experimental studies of irregular reflection have used supersonic wind tunnels or shock tubes, and few or no data are available for Mach reflection phenomena in the plasma regime. Similarly, analytic studies have often been confined to calorically perfect gases. We report the first direct observation, and numerical modeling, of Mach stem formation in a warm, dense plasma. Two ablatively driven aluminum disks launch oppositely directed, near-spherical shock waves into a cylindrical plastic block. The interaction of these shocks results in the formation of a Mach-ring shock that is diagnosed by x-ray backlighting. The data are modeled using radiation hydrocodes developed by AWE and LANL. The experiments were carried out at the University of Rochester's Omega laser [J. M. Soures, R. L. McCrory, C. P. Verdon et al., Phys. Plasmas 3, 2108 (1996)] and were inspired by modeling [A. M. Khokhlov, P. A. Hoeflich, E. S. Oran et al., Astrophys J. 524, L107 (1999)] of core-collapse supernovae which suggests that in an asymmetric supernova explosion significant mass may be ejected in a Mach ring launched by bipolar jets.

  7. Droplet formation and scaling in dense suspensions

    PubMed Central

    Miskin, Marc Z.; Jaeger, Heinrich M.

    2012-01-01

    When a dense suspension is squeezed from a nozzle, droplet detachment can occur similar to that of pure liquids. While in pure liquids the process of droplet detachment is well characterized through self-similar profiles and known scaling laws, we show here that the simple presence of particles causes suspensions to break up in a new fashion. Using high-speed imaging, we find that detachment of a suspension drop is described by a power law; specifically, we find that the neck minimum radius, rm, follows a power law in the time remaining until breakup at τ = 0. We demonstrate data collapse in a variety of particle/liquid combinations, packing fractions, solvent viscosities, and initial conditions. We argue that this scaling is a consequence of particles deforming the neck surface, thereby creating a pressure that is balanced by inertia, and show how it emerges from topological constraints that relate particle configurations with macroscopic Gaussian curvature. This new type of scaling, uniquely enforced by geometry and regulated by the particles, displays memory of its initial conditions, fails to be self-similar, and has implications for the pressure at generic suspension interfaces. PMID:22392979

  8. Polypeptide vesicles with densely packed multilayer membranes.

    PubMed

    Song, Ziyuan; Kim, Hojun; Ba, Xiaochu; Baumgartner, Ryan; Lee, Jung Seok; Tang, Haoyu; Leal, Cecilia; Cheng, Jianjun

    2015-05-28

    Multilamellar membranes are important building blocks for constructing self-assembled structures with improved barrier properties, such as multilamellar lipid vesicles. Polymeric vesicles (polymersomes) have attracted growing interest, but multilamellar polymersomes are much less explored. Here, we report the formation of polypeptide vesicles with unprecedented densely packed multilayer membrane structures from poly(ethylene glycol)-block-poly(γ-(4,5-dimethoxy-2-nitrobenzyl)-l-glutamate) (PEG-b-PL), an amphiphilic diblock rod-coil copolymer containing a short PEG block and a short hydrophobic rod-like polypeptide segment. The polypeptide rods undergo smectic ordering with PEG buried between the hydrophobic polypeptide layers. The size of both blocks and the rigidity of the hydrophobic polypeptide block are critical in determining the membrane structures. Increasing the PEG length in PEG-b-PL results in the formation of bilayer sheets, while using a random-coil polypeptide block leads to the formation of large compound micelles. UV treatment causes ester-bond cleavage of the polypeptide side chain, which induces a helix-to-coil transition, a change in copolymer amphiphilicity, and eventual disassembly of the vesicles. These polypeptide vesicles with unique membrane structures provide new insight into self-assembly structure control by precisely tuning the composition and conformation of polymeric amphiphiles.

  9. Proton Stopping Power in Warm Dense Hydrogen

    NASA Astrophysics Data System (ADS)

    Higginson, Drew; Chen, Sophia; Atzeni, Stefano; Gauthier, Maxence; Mangia, Feliciana; Marquès, Jean-Raphaël; Riquier, Raphaël; Fuchs, Julien

    2013-10-01

    Warm dense matter (WDM) research is fundamental to many fields of physics, including the fusion sciences and astrophysical phenomena. In the WDM regime, particle stopping power differs significantly from that in cold matter and ideal plasmas due to free-electron contributions, plasma correlation effects, and electron degeneracy. The creation of WDM with a temporal duration matched to the particle probes is difficult to achieve experimentally. The short-pulse laser platform allows for the production of WDM along with relatively short bunches of protons compatible with such measurements; however, until recently, the intrinsic broadband proton spectrum was not well suited to investigating the stopping power directly. This difficulty has been overcome using a novel magnetic particle selector (ΔE/E = 10%) to select protons (in the range 100-1000 keV), as demonstrated with the ELFIE laser at LULI, France. These proton bunches probe high-density (5 × 10^20 cm^-3) gases (H, He) heated by a nanosecond laser to reach estimated temperatures above 100 eV. Measurement of the proton energy loss within the heated gas allows the stopping power to be determined quantitatively. The experimental results in cold matter are compared to preexisting models to give credibility to the measurement technique. The results from heated matter show that the stopping power of 450 keV protons is dramatically reduced within heated hydrogen plasma.
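
    As a reference point for the cold-matter comparison, the classic Bethe formula (standard textbook material, not taken from this record) gives the stopping power of a fast ion on bound electrons; warm-dense-matter corrections modify the logarithm through free-electron, correlation, and degeneracy effects:

      % Bethe stopping power for a projectile of charge Ze and speed v in
      % a target with electron density n_e and mean excitation energy I
      % (Gaussian units, nonrelativistic):
      \[
        -\frac{dE}{dx} = \frac{4\pi n_e Z^{2} e^{4}}{m_e v^{2}}
          \ln\!\left(\frac{2 m_e v^{2}}{I}\right)
      \]
      % In warm dense matter the free electrons contribute a plasma term
      % with ln(2 m_e v^2 / (hbar * omega_p)), and correlations and
      % degeneracy shift the result further from the cold-matter value.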

  10. Order and instabilities in dense bacterial colonies

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev

    2012-02-01

    The structure of cell colonies is governed by the interplay of many physical and biological factors, ranging from the properties of the surrounding media to cell-cell communication and gene expression in individual cells. The biomechanical interactions arising from the growth and division of individual cells in confined environments are ubiquitous, yet little work has focused on this fundamental aspect of colony formation. By combining experimental observations of growing monolayers of a non-motile strain of the bacterium Escherichia coli in a shallow microfluidic chemostat with discrete-element simulations and continuous theory, we demonstrate that expansion of a dense colony leads to rapid orientational alignment of rod-like cells. However, in larger colonies, anisotropic compression may lead to a buckling instability which breaks the perfect nematic order. Furthermore, we found that in shallow cavities feedback between cell growth and mobility in a confined environment leads to a novel cell-streaming instability. Joint work with W. Mather, D. Volfson, O. Mondragón-Palomino, T. Danino, S. Cookson, and J. Hasty (UCSD) and D. Boyer and S. Orozco-Fuentes (UNAM, Mexico).

  11. Synthesis of dense energetic materials. Annual report

    SciTech Connect

    Coon, C.

    1982-07-01

    The objective of the research described in this report is to synthesize new, dense, stable, highly energetic materials that will ultimately be candidates for improved explosive and propellant formulations. Following strict guidelines pertaining to energy, density, stability, etc., specific target molecules were chosen that appear to possess the improved properties desired for new energetic materials. This report summarizes research on the synthesis of these target materials from February 1981 to January 1982. The following compounds were synthesized: 5,5'-diamino-3,3'-bi(1,2,4-oxadiazole); 5,5'-bis(trichloromethyl)-3,3'-di(1,2,4-oxadiazole); 3,3'-bi(1,2,4-oxadiazole); ethylene tetranitramine (ETNA); N,N-bis(methoxymethyl)acetamide; N,N-bis(chloromethyl)acetamide; 7,8-dimethylglycoluril; and 3,9-di(t-butyl)-13,14-dimethyltetracyclo(5,5,2,0^{5,13},0^{11,14})-1,3,5,7,9,11-hexaaza-6,12-dioxotetradecane.

  12. Sticky Particles: Modeling Rigid Aggregates in Dense Planetary Rings

    NASA Astrophysics Data System (ADS)

    Perrine, Randall P.; Richardson, D. C.; Scheeres, D. J.

    2008-09-01

    We present progress on our study of planetary ring dynamics. We use local N-body simulations to examine small patches of dense rings in which self-gravity and mutual collisions dominate the dynamics of the ring material. We use the numerical code pkdgrav to model the motions of 10^5-10^7 ring particles, using a sliding patch model with modified periodic boundary conditions. The exact nature of planetary ring particles is not well understood. If covered in a frost-like layer, such irregular surfaces may allow for weak cohesion between colliding particles. Thus we have recently added new functionality to our model, allowing "sticky particles" to lock into rigid aggregates while in a rotating reference frame. This capability allows particles to adhere to one another, forming irregularly shaped aggregates that move as rigid bodies. (The bonds between particles can subsequently break, given sufficient stress.) These aggregates have greater strength than gravitationally bound "rubble piles," and are thus able to grow larger and survive longer under similar stresses. This new functionality allows us to explore planetary ring properties and dynamics in a new way, by self-consistently forming (and destroying) non-spherical aggregates and moonlets via cohesive forces, while in a rotating frame, subjected to planetary tides. (We are not aware of any similar implementations in other existing models.) These improvements allow us to study the many effects that particle aggregation may have on the rings, such as overall ring structure; wake formation; equilibrium properties of non-spherical particles, like pitch angle, orientation, shape, size distribution, and spin; and the surface properties of the ring material. We present test cases and the latest results from this new model. This work is supported by a NASA Earth and Space Science Fellowship.
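
    As a toy illustration of the bonding rule sketched above (not the actual pkdgrav implementation, whose criteria, units, and rigid-body bookkeeping are more involved), a bond can form when two particles touch at low relative speed and break when the tensile force across it exceeds a strength limit; all names and thresholds here are hypothetical:

      import numpy as np

      STICK_SPEED = 0.01   # hypothetical max approach speed for sticking
      BOND_STRENGTH = 1.0  # hypothetical max tensile force before breakup

      def try_bond(x1, v1, x2, v2, r1, r2):
          """Bond two spheres if they touch while approaching slowly."""
          touching = np.linalg.norm(x1 - x2) <= r1 + r2
          slow = np.linalg.norm(v1 - v2) <= STICK_SPEED
          return touching and slow

      def bond_breaks(tensile_force):
          """Break the bond when the stress exceeds the strength limit."""
          return tensile_force > BOND_STRENGTH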

  13. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    DOE PAGES Beta

    Bang, Woosuk; Albright, Brian James; Bradley, Paul Andrew; Vold, Erik Lehman; Boettger, Jonathan Carl; Fernández, Juan Carlos

    2016-07-12

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. In conclusion, these simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement.
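
    For contrast with the linear scaling reported here, the ideal-plasma expectation cited in the abstract is the square-root temperature dependence of the rarefaction (ion-sound) speed; this is standard plasma physics, included only for orientation:

      % Ideal-plasma expansion into vacuum proceeds at roughly the ion
      % sound speed, with Z the charge state, T_e the electron temperature,
      % gamma an adiabatic index, and m_i the ion mass:
      \[
        c_s = \sqrt{\frac{\gamma Z k_B T_e}{m_i}} \propto \sqrt{T_e}
      \]
      % whereas the RAGE simulations here find a surface speed that grows
      % linearly with T over 1-100 eV, an equation-of-state effect.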

  14. Hot dense matter creation in short-pulse laser interaction with tamped foils

    SciTech Connect

    Chen, S; Pasley, J; Beg, F; Gregori, G; Evans, R G; Notley, M; Mackinnon, A; Glenzer, S; Hansen, S; King, J; Chung, H; Wilks, S; Stephens, R; Freeman, R; Weber, R; Saiz, E G; Khattak, F; Riley, D

    2006-08-15

    The possibility of producing hot dense matter has important applications for the understanding of transport processes in inertial confinement fusion (ICF) [1] and laboratory astrophysics experiments [2]. While the success of ICF requires the correct solution of a complex interaction between laser coupling, equation-of-state, and particle transport problems, the possibility of experimentally recreating conditions found during the ignition phase in a simplified geometry is extremely appealing. In this paper we show that the hot dense plasma conditions found during ICF ignition experiments can be reproduced by illuminating a tamped foil with a high-intensity laser. We show that temperatures on the order of kiloelectronvolts at solid densities can be achieved under controlled conditions during the experiment. Hydrodynamic tamping by surface coatings allows higher-density regimes to be reached by enabling the diagnosis of matter that has not yet begun to decompress, thus opening the possibility of directly investigating strongly coupled systems [3]. Our experimental diagnostics are based on K-shell spectroscopy coupled to x-ray imaging techniques. Such techniques have recently become prevalent in the diagnosis of hot dense matter [4]. By looking at the presence, and relative strengths, of lines associated with different ionization states, spectroscopy provides considerable insight into plasma conditions. At the same time, curved-crystal imaging techniques allow for the spatial resolution of different regions of the target, allowing comparison of heating processes with the results of particle-in-cell (PIC) and hybrid simulation codes.

  15. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    NASA Astrophysics Data System (ADS)

    Bang, W.; Albright, B. J.; Bradley, P. A.; Vold, E. L.; Boettger, J. C.; Fernández, J. C.

    2016-07-01

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. These simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement.

  16. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    PubMed Central

    Bang, W.; Albright, B. J.; Bradley, P. A.; Vold, E. L.; Boettger, J. C.; Fernández, J. C.

    2016-01-01

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. These simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement. PMID:27405664

  17. X-ray and optical studies of dense plasmas

    NASA Astrophysics Data System (ADS)

    Ellwi, Samir Shakir

    X-ray and optical investigations of dense plasmas and x-ray sources for laser-plasma studies are presented in this thesis. Short-pulse laser interaction with solids is reviewed, and the transport of laser energy into the bulk of the target by electron thermal conduction, radiation and shock waves is described. X-ray characterisation of different types of plasma is presented. The first experiment deals with the generation of a plasma cathode x-ray source. The experimental results are compared with a simulation made using a simple self-consistent model. The x-ray source size depends upon the cone angle of the tip of the anode. A wide range of experimental data for different parameters (anode-cathode separation, anode positive voltage, anode material, cathode material and laser energy) is collected and analysed. In chapter 5 the equation of state of gold is studied using the shock-wave reflection method. Experimental measurements are made for both direct and indirect drive, and the data are compared to the SESAME tabular data. Indirect drive is found to give a more accurate measurement than direct drive using the phase zone plate (PZP) technique. Preheating effects in laser-driven shock waves are presented in chapter 6. We used two different diagnostics: colour-temperature measurements deduced by recording the target rear-side emissivity in two spectral bands, and target rear-side reflectivity measurements. We use the MULTI hydrodynamic code to estimate the preheat temperature, coupled with a Fresnel reflectivity model, in order to compare the theoretical calculations with the experimental observations. Qualitative results on energy transport by fast hot electrons in cold solid and compressed plastic are presented in chapter 7. K-alpha emission from buried chlorine fluor layers is used to measure the fast-electron transport. These data are collected from time-integrated spectrometers using K-alpha spectroscopy.

  18. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, accuracy, efficiency and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, have been reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative was incorporated for one of the more tedious phases of developing such a methodology, namely, the automatic differentiation of the flow-analysis computer code in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for a shape optimization of a practical configuration with the higher-fidelity simulations (TLNS and dense-grid based simulations) required substantial computational resources. Therefore, the final improvement reported herein responded to this point by including an alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate-gradient (PbCG) and other direct solvers.
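
    The sensitivity-generation step mentioned above (automatic differentiation of the flow-analysis code) can be illustrated with a minimal forward-mode example; this dual-number sketch is generic and is not the tool used in the paper:

      # Minimal forward-mode automatic differentiation via dual numbers.
      # Each value carries its derivative with respect to one design
      # variable, so a code propagates exact sensitivities with values.
      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      # d/dx of f(x) = x * (x + 3) at x = 2 is 2x + 3 = 7.
      x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
      f = x * (x + 3.0)
      print(f.val, f.der)  # 10.0 7.0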

  19. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  20. Generalized spatiotemporal myocardial strain analysis for DENSE and SPAMM imaging.

    PubMed

    Young, Alistair A; Li, Bo; Kirton, Robert S; Cowan, Brett R

    2012-06-01

    Displacement encoding using stimulated echoes (DENSE) and spatial modulation of magnetization (SPAMM) are MRI techniques for quantifying myocardial displacement and strain. However, DENSE has not been compared against SPAMM in phantoms exhibiting nonhomogeneous strain, and interobserver variability has not been compared between DENSE and SPAMM. To perform these comparisons, there is a need for a generalized analysis framework for the evaluation of myocardial strain. A spatiotemporal mathematical model was used to represent myocardial geometry and motion. The model was warped to each frame using tissue displacement maps calculated from either automated phase unwrapping (DENSE) or nonrigid registration (SPAMM). Strain and motion were then calculated from the model using standard methods. DENSE and SPAMM results were compared in a deformable gel phantom exhibiting known nonhomogeneous strain, and interobserver errors were determined in 19 healthy human volunteers. Nonhomogeneous strain in the phantom was accurately quantified using both DENSE and SPAMM. In the healthy volunteers, DENSE produced smaller interobserver errors than SPAMM for radial strain (-0.009 ± 0.069 vs. 0.029 ± 0.152, respectively; bias ± 95% confidence interval). In conclusion, generalized spatiotemporal modeling enables robust myocardial strain analysis for DENSE or SPAMM.
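
    The "standard methods" for turning a fitted displacement field into strain typically pass through the deformation gradient; a minimal NumPy sketch of the Green-Lagrange strain tensor (generic continuum mechanics, not the authors' code) is:

      import numpy as np

      # Green-Lagrange strain from a displacement gradient (2D sketch).
      # du_dX[i, j] = d u_i / d X_j, e.g. from differentiating the fitted
      # spatiotemporal motion model at a myocardial material point.
      def green_lagrange(du_dX):
          F = np.eye(2) + du_dX               # deformation gradient
          return 0.5 * (F.T @ F - np.eye(2))  # E = (F^T F - I) / 2

      # Hypothetical gradient: 10% stretch in x plus a small shear.
      E = green_lagrange(np.array([[0.10, 0.02],
                                   [0.00, -0.05]]))
      print(E)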

  1. The chemistry of phosphorus in dense interstellar clouds

    NASA Technical Reports Server (NTRS)

    Thorne, L. R.; Anicich, V. G.; Prasad, S. S.; Huntress, W. T., Jr.

    1984-01-01

    Laboratory experiments show that the ion-molecule chemistry of phosphorus is significantly different from that of nitrogen in dense interstellar clouds. The PH3 molecule is not readily formed by gas-phase, ion-molecule reactions in these regions. Laboratory results used in a simple kinetic model indicate that the most abundant molecule containing phosphorus in dense clouds is PO.

  2. Mining connected global and local dense subgraphs for bigdata

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Shen, Haiying

    2016-01-01

    The problem of discovering connected dense subgraphs of natural graphs is important in data analysis. Discovering dense subgraphs that do not contain denser subgraphs or are not contained in denser subgraphs (called significant dense subgraphs) is also critical for wide-ranging applications. In spite of many works on discovering dense subgraphs, there are no algorithms that can guarantee the connectivity of the returned subgraphs or discover significant dense subgraphs. Hence, in this paper, we define two subgraph discovery problems to discover connected and significant dense subgraphs, propose polynomial-time algorithms and theoretically prove their validity. We also propose an algorithm to further improve the time and space efficiency of our basic algorithm for discovering significant dense subgraphs in big data by taking advantage of the unique features of large natural graphs. In the experiments, we use massive natural graphs to evaluate our algorithms in comparison with previous algorithms. The experimental results show the effectiveness of our algorithms for the two problems and their efficiency. This work is also the first that reveals the physical significance of significant dense subgraphs in natural graphs from different domains.
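
    For background on what dense-subgraph routines compute, the classic greedy peeling algorithm of Charikar gives a 2-approximation to the densest subgraph under the average-degree objective |E|/|V|. The sketch below is that textbook routine, not the connectivity-guaranteeing algorithms proposed in the paper:

      # Greedy peeling for the densest subgraph (density = |E|/|V|).
      def densest_subgraph(adj):
          """adj: dict mapping vertex -> set of neighbors (undirected)."""
          adj = {v: set(ns) for v, ns in adj.items()}
          m = sum(len(ns) for ns in adj.values()) // 2
          best_density, best = 0.0, set(adj)
          while adj:
              density = m / len(adj)
              if density > best_density:
                  best_density, best = density, set(adj)
              v = min(adj, key=lambda u: len(adj[u]))  # min-degree vertex
              for u in adj[v]:
                  adj[u].discard(v)
              m -= len(adj[v])
              del adj[v]
          return best, best_density

      # Toy graph: a 4-clique with one pendant vertex; the clique wins.
      g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
      print(densest_subgraph(g))  # ({1, 2, 3, 4}, 1.5)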

  3. Coding for Electronic Mail

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages is improved over messages transmitted by conventional coding. Coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.
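
    The adaptive noiseless coding referred to here is in the Rice family of codes; a minimal Golomb-Rice encoder for nonnegative integers (a textbook sketch, not the USEEM implementation) splits each value into a unary quotient and a k-bit remainder:

      # Minimal Golomb-Rice code: quotient in unary, remainder in k bits.
      # Adaptive schemes choose k from the local symbol statistics.
      def rice_encode(n, k):
          q, r = n >> k, n & ((1 << k) - 1)
          return "1" * q + "0" + format(r, f"0{k}b")

      def rice_decode(bits, k):
          q = bits.index("0")           # count of leading 1s = quotient
          r = int(bits[q + 1 : q + 1 + k], 2)
          return (q << k) | r

      assert rice_decode(rice_encode(37, 3), 3) == 37
      print(rice_encode(37, 3))  # 37 = 4*8 + 5 -> "11110101"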

  4. Two Rab2 interactors regulate dense-core vesicle maturation.

    PubMed

    Ailion, Michael; Hannemann, Mandy; Dalton, Susan; Pappas, Andrea; Watanabe, Shigeki; Hegermann, Jan; Liu, Qiang; Han, Hsiao-Fen; Gu, Mingyu; Goulding, Morgan Q; Sasidharan, Nikhil; Schuske, Kim; Hullett, Patrick; Eimer, Stefan; Jorgensen, Erik M

    2014-04-01

    Peptide neuromodulators are released from a unique organelle: the dense-core vesicle. Dense-core vesicles are generated at the trans-Golgi and then sort cargo during maturation before being secreted. To identify proteins that act in this pathway, we performed a genetic screen in Caenorhabditis elegans for mutants defective in dense-core vesicle function. We identified two conserved Rab2-binding proteins: RUND-1, a RUN domain protein, and CCCP-1, a coiled-coil protein. RUND-1 and CCCP-1 colocalize with RAB-2 at the Golgi, and rab-2, rund-1, and cccp-1 mutants have similar defects in sorting soluble and transmembrane dense-core vesicle cargos. RUND-1 also interacts with the Rab2 GAP protein TBC-8 and the BAR domain protein RIC-19, a RAB-2 effector. In summary, a pathway of conserved proteins controls the maturation of dense-core vesicles at the trans-Golgi network. PMID:24698274

  5. XSOR codes users manual

    SciTech Connect

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the XSOR source term estimation codes. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  6. DLLExternalCode

    SciTech Connect

    Greg Flach, Frank Smith

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
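
    The loop the DLL implements (receive inputs, write an input file, run the external application, read outputs back) can be sketched generically. The Python analogue below is illustrative only; it does not use the actual GoldSim DLL calling convention, and the executable and file names are hypothetical:

      import subprocess
      from pathlib import Path

      # Generic external-code coupling loop, analogous to what
      # DLLExternalCode does for GoldSim: write inputs, run, read outputs.
      def run_external(inputs, exe="external_code", workdir="run"):
          work = Path(workdir)
          work.mkdir(exist_ok=True)
          # 1. Create the input file the external application expects.
          (work / "inputs.txt").write_text(
              "\n".join(f"{k} = {v}" for k, v in inputs.items()))
          # 2. Run the external code; fail loudly on a nonzero exit.
          subprocess.run([exe, "inputs.txt"], cwd=work, check=True)
          # 3. Read back the outputs the external application wrote.
          return [float(line) for line in
                  (work / "outputs.txt").read_text().split()]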

  7. DLLExternalCode

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.

  8. Parafermion stabilizer codes

    NASA Astrophysics Data System (ADS)

    Gungordu, Utkan; Nepal, Rabindra; Kovalev, Alexey

    2015-03-01

    We define and study parafermion stabilizer codes [Phys. Rev. A 90, 042326 (2014)] which can be viewed as generalizations of Kitaev's one dimensional model of unpaired Majorana fermions. Parafermion stabilizer codes can protect against low-weight errors acting on a small subset of parafermion modes in analogy to qudit stabilizer codes. Examples of several smallest parafermion stabilizer codes are given. Our results show that parafermions can achieve a better encoding rate than Majorana fermions. A locality preserving embedding of qudit operators into parafermion operators is established which allows one to map known qudit stabilizer codes to parafermion codes. We also present a local 2D parafermion construction that combines topological protection of Kitaev's toric code with additional protection relying on parity conservation. This work was supported in part by the NSF under Grants No. Phy-1415600 and No. NSF-EPSCoR 1004094.

  9. Experimentally validated 3-D simulation of shock waves generated by dense explosives in confined complex geometries.

    PubMed

    Rigas, Fotis; Sklavounos, Spyros

    2005-05-20

    Accidental blast-wave generation and propagation in the surroundings poses severe threats to people and property. The prediction of overpressure maxima and of their change with time at specified distances can lead to useful conclusions in quantitative risk analysis applications. In this paper, the use of the computational fluid dynamics (CFD) code CFX-5.6 on dense-explosive detonation events is described. The work deals with the three-dimensional simulation of overpressure wave propagation generated by the detonation of a dense explosive within a small-scale branched tunnel. It also aims at validating the code against published experimental data and at studying the way the resulting shock wave propagates in a confined-space configuration. Predicted overpressure histories were plotted and compared against experimental measurements, showing reasonably good agreement. Overpressure maxima and corresponding times were found to be close to the measured ones, confirming that CFD may constitute a useful tool in explosion hazard assessment procedures. Moreover, it was found that the blast wave propagates preserving supersonic speed along the tunnel, accompanied by high overpressure levels, indicating that space confinement favors the formation and maintenance of a shock rather than a weak pressure wave. PMID:15885402

  10. Dust charging in the dense Enceladus torus

    NASA Astrophysics Data System (ADS)

    Yaroshenko, Victoria; Lühr, Hermann; Morfill, Gregor

    2013-04-01

    The key parameter of dust-plasma interactions is the charge carried by a dust particle. The grain electrostatic potential is usually calculated from the so-called orbit-motion-limited (OML) model [1], which is valid for a single particle immersed in a collisionless plasma with Maxwellian electron and ion distributions. Such a parameter regime clearly cannot be applied directly to the conditions relevant for the dense Enceladus neutral torus and plume, where the plasma is multispecies and multistreaming and the dust density is high, sometimes even exceeding the plasma number density. We have examined several new factors which can significantly affect grain charging in the dust-loaded plasma of the Enceladus torus and in the plume region and which, to our knowledge, have not been investigated up to now for such plasma environments. These include: (a) the influence of the multispecies plasma composition, namely the presence of two electron populations with electron temperatures ranging from a few eV up to a hundred eV [2], several ion species (e.g. corotating water-group ions and protons, characterized by different kinetic temperatures), as well as cold non-thermalized new-born water-group ions which move with the Kepler velocity [3]; (b) the effect of ion-neutral collisions on dust charging in the dense Enceladus torus and in the plume; and (c) the effect of high dust density, when a grain can no longer be considered an isolated particle (especially relevant for the plume region, where the average negative dust charge density according to Cassini measurements is of the order of, or even exceeds, the plasma number density [4,5]). It turns out that in this last case the electrostatic potential and the respective dust charge cannot be deduced from the initial OML formalism, and the effect of dust density must be incorporated into the plasma fluxes flowing to the grain surface in order to calculate the grain equilibrium charge; since the dust in the planetary rings comes in a wide
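
    For orientation, the baseline OML charge balance that this work generalizes equates the (Boltzmann-suppressed) electron flux and the attracted-ion flux on a negatively charged grain. The bisection sketch below uses the standard OML expressions with hypothetical hydrogen-plasma parameters, not the multispecies Enceladus conditions discussed above:

      import numpy as np

      # OML balance, with x = e*phi/(k*Te) < 0 the dimensionless potential:
      # electron flux ~ exp(x); ion flux ~ sqrt(Ti*me/(Te*mi)) * (1 - x*Te/Ti).
      def current_balance(x, ti_over_te=1.0, mi_over_me=1836.0):
          electron = np.exp(x)
          ion = np.sqrt(ti_over_te / mi_over_me) * (1.0 - x / ti_over_te)
          return electron - ion

      lo, hi = -10.0, 0.0               # bracket the floating potential
      for _ in range(60):               # bisection on the current balance
          mid = 0.5 * (lo + hi)
          if current_balance(mid) > 0:  # electron current still dominates
              hi = mid
          else:
              lo = mid
      print(f"e*phi/kTe ~ {lo:.2f}")    # about -2.5 for hydrogen, Ti = Te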

  11. SUPPORTED DENSE CERAMIC MEMBRANES FOR OXYGEN SEPARATION

    SciTech Connect

    Timothy L. Ward

    2003-03-01

    This project addresses the need for reliable fabrication methods of supported thin/thick dense ceramic membranes for oxygen separation. Some ceramic materials that possess mixed conductivity (electronic and ionic) at high temperature have the potential to permeate oxygen with perfect selectivity, making them very attractive for oxygen separation and membrane reactor applications. In order to maximize permeation rates at the lowest possible temperatures, it is desirable to minimize diffusional limitations within the ceramic by reducing the thickness of the ceramic membrane, preferably to 10 µm or thinner. It has proven to be very challenging to reliably fabricate dense, defect-free ceramic membrane layers of such thickness. In this project we are investigating the use of ultrafine SrCo0.5FeOx (SCFO) powders produced by aerosol pyrolysis to fabricate such supported membranes. SrCo0.5FeOx is a ceramic composition that has been shown to have desirable oxygen permeability, as well as good chemical stability in the reducing environments that are encountered in some important applications. Our approach is to use a doctor-blade procedure to deposit pastes prepared from the aerosol-derived SCFO powders onto porous SCFO supports. We have previously shown that membrane layers deposited from the aerosol powders can be sintered to high density without densification of the underlying support. However, these membrane layers contained large-scale cracks and open areas, making them unacceptable for membrane purposes. In the past year, we have refined the paste formulations based on guidance from the ceramic tape-casting literature. We have identified a multicomponent organic formulation utilizing castor oil as dispersant in a solvent of mineral spirits and isopropanol. Other additives were polyvinylbutyral as binder and dibutylphthalate as plasticizer. The nonaqueous formulation has superior wetting properties with the powder, and

  12. The chemistry of dense interstellar clouds

    NASA Technical Reports Server (NTRS)

    Irvine, W. M.

    1991-01-01

    The basic theme of this program is the study of molecular complexity and evolution in interstellar and circumstellar clouds incorporating the biogenic elements. Recent results include the identification of a new astronomical carbon-chain molecule, C4Si. This species was detected in the envelope expelled from the evolved star IRC+10216 in observations at the Nobeyama Radio Observatory in Japan. C4Si is the carrier of six lines which had previously been observed but not identified. This detection reveals the existence of a new series of carbon-chain molecules, CnSi (n = 1, 2, 4). Such molecules may well be formed from the reaction of Si(+) with acetylene and acetylene derivatives. Other recent research has concentrated on the chemical composition of the cold, dark interstellar clouds, the nearest dense molecular clouds to the solar system. Such regions have very low kinetic temperatures, on the order of 10 K, and are known to be formation sites for solar-type stars. We have recently identified for the first time in such regions the species H2S, NO, and HCOOH (formic acid). The H2S abundance appears to exceed that predicted by gas-phase models of ion-molecule chemistry, perhaps suggesting the importance of synthesis on grain surfaces. Additional observations in dark clouds have studied the ratio of ortho- to para-thioformaldehyde. Since this ratio is expected to be unaffected by both radiative and ordinary collisional processes in the cloud, it may well reflect the formation conditions of this molecule. The ratio is observed to depart from that expected under conditions of chemical equilibrium at formation, perhaps reflecting efficient interchange between the gas phase and cold dust grains.

  13. Model For Dense Molecular Cloud Cores

    NASA Technical Reports Server (NTRS)

    Doty, Steven D.; Neufeld, David A.

    1997-01-01

    We present a detailed theoretical model for the thermal balance, chemistry, and radiative transfer within quiescent dense molecular cloud cores that contain a central protostar. In the interior of such cores, we expect the dust and gas temperatures to be well coupled, while in the outer regions CO rotational emissions dominate the gas cooling and the predicted gas temperature lies significantly below the dust temperature. Large spatial variations in the gas temperature are expected to affect the gas phase chemistry dramatically; in particular, the predicted water abundance varies by more than a factor of 1000 within cloud cores that contain luminous protostars. Based upon our predictions for the thermal and chemical structure of cloud cores, we have constructed self-consistent radiative transfer models to compute the line strengths and line profiles for transitions of (12)CO, (13)CO, C(18)O, ortho- and para-H2(16)O, ortho- and para-H2(18)O, and O I. We carried out a general parameter study to determine the dependence of the model predictions upon the parameters assumed for the source. We expect many of the far-infrared and submillimeter rotational transitions of water to be detectable either in emission or absorption with the use of the Infrared Space Observatory (ISO) and the Submillimeter Wave Astronomy Satellite. Quiescent, radiatively heated hot cores are expected to show low-gain maser emission in the 183 GHz 3(13)-2(20) water line, such as has been observed toward several hot core regions using ground-based telescopes. We predict the (3)P(1)-(3)P(2) fine-structure transition of atomic oxygen near 63 microns to be in strong absorption against the continuum for many sources. Our model can also account successfully for recent ISO observations of absorption in rovibrational transitions of water toward the source AFGL 2591.

  14. Upper and lower bounds on quantum codes

    NASA Astrophysics Data System (ADS)

    Smith, Graeme Stewart Baird

    This thesis provides bounds on the performance of quantum error correcting codes when used for quantum communication and quantum key distribution. The first two chapters provide a bare-bones introduction to classical and quantum error correcting codes, respectively. The next four chapters present achievable rates for quantum codes in various scenarios. The final chapter is dedicated to an upper bound on the quantum channel capacity. Chapter 3 studies coding for adversarial noise using quantum list codes, showing there exist quantum codes with high rates and short lists. These can be used, together with a very short secret key, to communicate with high fidelity at noise levels for which perfect fidelity is impossible. Chapter 4 explores the performance of a family of degenerate codes when used to communicate over Pauli channels, showing they can be used to communicate over almost any Pauli channel at rates that are impossible for a nondegenerate code and that exceed those of previously known degenerate codes. By studying the scaling of the optimal block length as a function of the channel's parameters, we develop a heuristic for designing even better codes. Chapter 5 describes an equivalence between a family of noisy preprocessing protocols for quantum key distribution and entanglement distillation protocols whose target state belongs to a class of private states called "twisted states." In Chapter 6, the codes of Chapter 4 are combined with the protocols of Chapter 5 to provide higher key rates for one-way quantum key distribution than were previously thought possible. Finally, Chapter 7 presents a new upper bound on the quantum channel capacity that is both additive and convex, and which can be interpreted as the capacity of the channel for communication given access to side channels from a class of zero capacity "cloning" channels. This "clone assisted capacity" is equal to the unassisted capacity for channels that are degradable, which we use to find new upper

  15. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    SciTech Connect

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for the simulation of the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; parameters and different models of the nuclear level density, one of the most important components of statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  16. Coding and transmission of subband coded images on the Internet

    NASA Astrophysics Data System (ADS)

    Wah, Benjamin W.; Su, Xiao

    2001-09-01

    Subband-coded images can be transmitted over the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but degraded quality when packets are lost. Although images are currently delivered over the Internet by TCP, we study in this paper the use of UDP to deliver multiple-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing an optimized reconstruction-based subband transform (ORB-ST) in multiple-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
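
    To make the MDC idea concrete, a toy two-description scheme sends even- and odd-indexed samples as separate descriptions and reconstructs a lost description by interpolating the one that arrives. This is a generic illustration of multiple-description coding, not the ORB-ST design of the paper:

      import numpy as np

      # Toy MDC: two descriptions carry even- and odd-indexed samples.
      def split(signal):
          return signal[0::2], signal[1::2]

      # If the odd description is lost over UDP, estimate each missing
      # sample as the average of its surviving even-indexed neighbors.
      def reconstruct_from_even(even, n):
          out = np.zeros(n)
          out[0::2] = even
          out[1:n-1:2] = 0.5 * (out[0:n-2:2] + out[2:n:2])
          if n % 2 == 0:
              out[-1] = out[-2]   # edge sample: repeat the last neighbor
          return out

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      even, odd = split(x)
      print(reconstruct_from_even(even, len(x)))  # [1. 2. 3. 4. 5. 5.]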

  17. Technique for code augmentation. Memorandum report

    SciTech Connect

    Fickie, K.D.; Grosh, J.

    1987-10-01

    A simple method for calling pre-existing computer codes from inside another program is described. Three applications drawn from the field of interior ballistics are included as examples. Two of the cases are optimization problems and the other is a simple search for a constraint condition. More elaborate applications in the area of computer-aided design are discussed.

  18. Triggering Collapse of the Presolar Dense Cloud Core and Injecting Short-lived Radioisotopes with a Shock Wave. III. Rotating Three-dimensional Cloud Cores

    NASA Astrophysics Data System (ADS)

    Boss, Alan P.; Keiser, Sandra A.

    2014-06-01

    A key test of the supernova triggering and injection hypothesis for the origin of the solar system's short-lived radioisotopes is to reproduce the inferred initial abundances of these isotopes. We present here the most detailed models to date of the shock wave triggering and injection process, where shock waves with varied properties strike fully three-dimensional, rotating, dense cloud cores. The models are calculated with the FLASH adaptive mesh hydrodynamics code. Three different outcomes can result: triggered collapse leading to fragmentation into a multiple protostar system; triggered collapse leading to a single protostar embedded in a protostellar disk; or failure to undergo dynamic collapse. Shock wave material is injected into the collapsing clouds through Rayleigh-Taylor fingers, resulting in initially inhomogeneous distributions in the protostars and protostellar disks. Cloud rotation about an axis aligned with the shock propagation direction does not increase the injection efficiency appreciably, as the shock parameters were chosen to be optimal for injection even in the absence of rotation. For a shock wave from a core-collapse supernova, the dilution factors for supernova material are in the range of ~10^-4 to ~3 × 10^-4, in agreement with recent laboratory estimates of the required amount of dilution for 60Fe and 26Al. We conclude that a type II supernova remains a promising candidate for synthesizing the solar system's short-lived radioisotopes shortly before their injection into the presolar cloud core by the supernova's remnant shock wave.

  19. Maximally dense packings of two-dimensional convex and concave noncircular particles.

    PubMed

    Atkinson, Steven; Jiao, Yang; Torquato, Salvatore

    2012-09-01

    Dense packings of hard particles have important applications in many fields, including condensed matter physics, discrete geometry, and cell biology. In this paper, we employ a stochastic search implementation of the Torquato-Jiao adaptive-shrinking-cell (ASC) optimization scheme [Nature (London) 460, 876 (2009)] to find maximally dense particle packings in d-dimensional Euclidean space R(d). While the original implementation was designed to study spheres and convex polyhedra in d≥3, our implementation focuses on d=2 and extends the algorithm to include both concave polygons and certain complex convex or concave nonpolygonal particle shapes. We verify the robustness of this packing protocol by successfully reproducing the known putative optimal packings of congruent copies of regular pentagons and octagons, then employ it to suggest dense packing arrangements of congruent copies of certain families of concave crosses, convex and concave curved triangles (incorporating shapes resembling the Mercedes-Benz logo), and "moonlike" shapes. Analytical constructions are determined subsequently to obtain the densest known packings of these particle shapes. For the examples considered, we find that the densest packings of both convex and concave particles with central symmetry are achieved by their corresponding optimal Bravais lattice packings; for particles lacking central symmetry, the densest packings obtained are nonlattice periodic packings, which are consistent with recently-proposed general organizing principles for hard particles. Moreover, we find that the densest known packings of certain curved triangles are periodic with a four-particle basis, and we find that the densest known periodic packings of certain moonlike shapes possess no inherent symmetries. Our work adds to the growing evidence that particle shape can be used as a tuning parameter to achieve a diversity of packing structures.

  20. Maximally dense packings of two-dimensional convex and concave noncircular particles

    NASA Astrophysics Data System (ADS)

    Atkinson, Steven; Jiao, Yang; Torquato, Salvatore

    2012-09-01

    Dense packings of hard particles have important applications in many fields, including condensed matter physics, discrete geometry, and cell biology. In this paper, we employ a stochastic search implementation of the Torquato-Jiao adaptive-shrinking-cell (ASC) optimization scheme [Nature (London) 460, 876 (2009)] to find maximally dense particle packings in d-dimensional Euclidean space R^d. While the original implementation was designed to study spheres and convex polyhedra in d≥3, our implementation focuses on d=2 and extends the algorithm to include both concave polygons and certain complex convex or concave nonpolygonal particle shapes. We verify the robustness of this packing protocol by successfully reproducing the known putative optimal packings of congruent copies of regular pentagons and octagons, then employ it to suggest dense packing arrangements of congruent copies of certain families of concave crosses, convex and concave curved triangles (incorporating shapes resembling the Mercedes-Benz logo), and “moonlike” shapes. Analytical constructions are determined subsequently to obtain the densest known packings of these particle shapes. For the examples considered, we find that the densest packings of both convex and concave particles with central symmetry are achieved by their corresponding optimal Bravais lattice packings; for particles lacking central symmetry, the densest packings obtained are nonlattice periodic packings, which are consistent with recently-proposed general organizing principles for hard particles. Moreover, we find that the densest known packings of certain curved triangles are periodic with a four-particle basis, and we find that the densest known periodic packings of certain moonlike shapes possess no inherent symmetries. Our work adds to the growing evidence that particle shape can be used as a tuning parameter to achieve a diversity of packing structures.
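
    The essence of an adaptive-shrinking-cell search can be illustrated with a far simpler system than the polygonal and curved shapes studied here. The sketch below is a hypothetical toy, not the authors' code: it alternates rejection-sampled particle moves with attempted isotropic cell shrinks for hard disks in a periodic square cell, and every parameter is illustrative.

```python
# Minimal sketch of an adaptive-shrinking-cell style packing search, using
# hard disks in a periodic square cell instead of the polygonal and curved
# shapes of the paper. All names and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def min_image_dist(p, q, box):
    """Distance between points p and q under periodic boundary conditions."""
    d = np.abs(p - q)
    d = np.minimum(d, box - d)
    return float(np.hypot(d[0], d[1]))

def overlaps(pos, i, box, diam):
    """True if disk i overlaps any other disk."""
    return any(min_image_dist(pos[i], pos[j], box) < diam
               for j in range(len(pos)) if j != i)

n, radius, box = 16, 0.5, 6.0
diam = 2.0 * radius

# random non-overlapping initial configuration (rejection sampling)
pos = np.empty((0, 2))
while len(pos) < n:
    trial = rng.uniform(0.0, box, 2)
    if all(min_image_dist(trial, q, box) >= diam for q in pos):
        pos = np.vstack([pos, trial])

for step in range(20000):
    # particle move: small random displacement, rejected on overlap
    i = int(rng.integers(n))
    old = pos[i].copy()
    pos[i] = (pos[i] + rng.normal(0.0, 0.05, 2)) % box
    if overlaps(pos, i, box, diam):
        pos[i] = old
    # cell move: attempt a small isotropic shrink; particle coordinates
    # rescale affinely with the cell, and the shrink is rejected if any
    # pair of (fixed-size) disks would then overlap
    if step % 10 == 0:
        scale = 0.999
        new_pos, new_box = pos * scale, box * scale
        if not any(overlaps(new_pos, k, new_box, diam) for k in range(n)):
            pos, box = new_pos, new_box

# the densest disk packing is the triangular lattice, pi/sqrt(12) ~ 0.9069;
# this toy search only approaches that bound
print(f"packing fraction reached: {n * np.pi * radius**2 / box**2:.4f}")
```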

  2. Dense-gas dispersion advection-diffusion model

    SciTech Connect

    Ermak, D.L.

    1992-07-01

    A dense-gas version of the ADPIC particle-in-cell, advection-diffusion model was developed to simulate the atmospheric dispersion of denser-than-air releases. In developing the model, it was assumed that the dense-gas effects could be described in terms of the vertically-averaged thermodynamic properties and the local height of the cloud. The dense-gas effects were treated as a perturbation to the ambient thermodynamic properties (density and temperature), ground level heat flux, turbulence level (diffusivity), and windfield (gravity flow) within the local region of the dense-gas cloud. These perturbations were calculated from conservation of energy and conservation of momentum principles along with the ideal gas law equation of state for a mixture of gases. ADPIC, which is generally run in conjunction with a mass-conserving wind flow model to provide the advection field, contains all the dense-gas modifications within it. This feature provides the versatility of coupling the new dense-gas ADPIC with alternative wind flow models. The new dense-gas ADPIC has been used to simulate the atmospheric dispersion of ground-level, colder-than-ambient, denser-than-air releases and has compared favorably with the results of field-scale experiments.
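
    The advection-diffusion treatment lends itself to a compact illustration: Lagrangian marker particles drift with the wind while taking random-walk steps whose variance matches the turbulent diffusivity. The one-dimensional sketch below assumes a constant wind and diffusivity purely for illustration; it is not the ADPIC code.

```python
# Minimal 1-D sketch of the particle advection-diffusion idea behind
# ADPIC-type models: marker particles advect with the wind field and take a
# random-walk step whose variance matches the turbulent diffusivity.
# Constant wind and diffusivity are simplifying assumptions, not ADPIC itself.
import numpy as np

rng = np.random.default_rng(1)

n_particles = 100_000
u = 2.0        # mean wind speed, m/s (assumed constant)
K = 5.0        # eddy diffusivity, m^2/s (assumed constant)
dt = 1.0       # time step, s
n_steps = 600  # 10 minutes of transport

x = np.zeros(n_particles)          # all mass released at x = 0
for _ in range(n_steps):
    x += u * dt                                              # advection
    x += rng.normal(0.0, np.sqrt(2 * K * dt), n_particles)   # diffusion

# For constant u and K the analytic solution is a Gaussian centered at u*t
# with variance 2*K*t, which the particle ensemble should reproduce.
t = n_steps * dt
print(f"mean:  {x.mean():10.1f}  (theory {u * t:10.1f})")
print(f"sigma: {x.std():10.1f}  (theory {np.sqrt(2 * K * t):10.1f})")
```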

  3. Industrial Code Development

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1991-01-01

    The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. Among the first codes to be completed and presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.

  4. Updating the Read Codes

    PubMed Central

    Robinson, David; Schulz, Erich; Brown, Philip; Price, Colin

    1997-01-01

    Abstract The Read Codes are a hierarchically-arranged controlled clinical vocabulary introduced in the early 1980s and now consisting of three maintained versions of differing complexity. The code sets are dynamic, and are updated quarterly in response to requests from users including clinicians in both primary and secondary care, software suppliers, and advice from a network of specialist healthcare professionals. The codes' continual evolution of content, both across and within versions, highlights tensions between different users and uses of coded clinical data. Internal processes, external interactions and new structural features implemented by the NHS Centre for Coding and Classification (NHSCCC) for user interactive maintenance of the Read Codes are described, and over 2000 items of user feedback episodes received over a 15-month period are analysed. PMID:9391934

  5. Mechanical code comparator

    DOEpatents

    Peter, Frank J.; Dalton, Larry J.; Plummer, David W.

    2002-01-01

    A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.

  6. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  7. Flow distortion at a dense forest edge

    NASA Astrophysics Data System (ADS)

    Dellwik, E.; Mann, J.

    2012-12-01

    Results from a forest edge experiment with two masts and one horizontally pointed wind lidar are presented. The experiment was performed at a dense beech forest edge of the Tromnæs forest, which is a 24m tall mature beech forest on the island Falster, Denmark. The topography at the site is flat. The masts were placed approximately 1.5 canopy heights upwind and downwind of the edge and are two canopy heights tall. We present data showing how the forest edge distorts the flow when the flow is perpendicular to the edge and towards the forest during near-neutral atmospheric stratification. Although the wind gradient above the canopy is similar before and after the edge, the momentum flux is strongly reduced above the canopy. This result is especially pronounced during summer, when the leaf area index is high and the momentum flux was slightly positive 1.2 canopy heights above ground level. This is contrary to the results by standard Reynolds' averaged Navier Stokes models that predict an overshoot of the momentum flux. Further above the forest, the total amount of turbulent kinetic energy remained constant compared to the upwind measurements. A reduction of the vertical variance of the flow was largely compensated by an increase in the lateral variance, whereas the streamwise variance remained approximately constant. This result is in contrast to the predictions by homogeneous rapid distortion theory. We apply and develop an alternative framework based on inhomogeneous rapid distortion theory in combination with the turbulence model by Mann (1994), which can predict the observed changes of the flow. The inhomogeneous rapid distortion theory takes the blocking of the flow by the top of the canopy into account. This effect turns out to suppress the vertical momentum flux drastically and redistribute the vertical fluctuations into the lateral direction. We show one- and two-point spectra for verification of the model. The results are relevant for understanding the on

  8. SUPPORTED DENSE CERAMIC MEMBRANES FOR OXYGEN SEPARATION

    SciTech Connect

    Timothy L. Ward

    2000-06-30

    This successfully reduced cracking; however, the films retained open porosity. The investigation of this concept will be continued in the final year of the project. Investigation of a metal organic chemical vapor deposition (MOCVD) method for defect mending in dense membranes was also initiated. An appropriate metal organic precursor (iron tetramethylheptanedionate) was identified whose deposition can be controlled by access to oxygen at temperatures in the 280-300 °C range. Initial experiments have deposited iron oxide, but only on the membrane surface; thus refinement of this method will continue.

  9. Mixtures in the Warm, Dense Matter Regime

    NASA Astrophysics Data System (ADS)

    Collins, Lee A.

    2009-03-01

    The bulk of normal matter from planets to the intergalactic medium exists as a composite of various elemental constituents. The interactions among these different species determine the basic properties of such diverse environments. For dilute systems, simple gas laws serve well to describe the mixing. However, once the density and temperature increase, more sophisticated treatments of the electronic component and dynamics become necessary. For the warm, dense matter (WDM) region [10^22-10^25 atoms/cm^3 and 300K - 10^6 K], quantum Monte Carlo and molecular dynamics, utilizing finite-temperature density functional theory (DFT), have served as the basic exploratory tools and benchmarks for other methods. The computational intensity of both methods, especially for mixtures, which require large sample sizes to attain statistical accuracy, has focused considerable attention on mixing prescriptions based on the properties of the pure atomic constituents. Though extensively utilized in many disciplines, these rules have received very little verification [1,2]. We examine the validity of two such rules, density and pressure mixing, for several systems and concentrations by comparing against quantum calculations for the fully-interacting composite. We find considerable differences in some regimes, especially for optical properties. We also probe dynamical properties such as diffusion and viscosity as well as the role of impurities. Finally, as a means of extending DFT results to higher temperature regimes, we also study orbital-free molecular dynamics (OFMD) approaches [3] based on various approximations to the basic density functional. These OFMD schemes permit a smooth transition from the WDM region to simpler one-component plasma and ideal gas models. Research in collaboration with J.D. Kress (LANL), D.A. Horner (LANL), and Flavien Lambert (CEA). [1] D.A. Horner, J.D. Kress, and L.A. Collins, Phys. Rev. B 77, 064102 (2008). [2] F. Lambert et al., Phys. Rev. E

  10. Coded continuous wave meteor radar

    NASA Astrophysics Data System (ADS)

    Vierinen, J.; Chau, J. L.; Pfeffer, N.; Clahsen, M.; Stober, G.

    2015-07-01

    The concept of coded continuous wave meteor radar is introduced. The radar uses a continuously transmitted pseudo-random waveform, which has several advantages: coding avoids range-aliased echoes, which are often seen with commonly used pulsed specular meteor radars (SMRs); continuous transmissions maximize pulse compression gain, allowing operation with significantly lower peak transmit power; the temporal resolution can be changed after performing a measurement, as it does not depend on pulse spacing; and the low signal-to-noise ratio allows multiple geographically separated transmitters to be used in the same frequency band without significantly interfering with each other. The latter allows the same receiver antennas to be used to receive multiple transmitters. The principles of the signal processing are discussed, along with several practical ways to increase computation speed and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. This would, for example, provide higher spatio-temporal resolution for mesospheric wind field measurements.
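
    The pulse-compression step at the heart of this scheme is easy to demonstrate: correlating the received samples against the known pseudo-random code concentrates the echo energy into a single delay bin. The sketch below uses an arbitrary random BPSK code, delay, and noise level (illustrative values only) and recovers the delay by FFT-based circular cross-correlation.

```python
# Sketch of pseudo-random coded CW pulse compression: the receiver
# cross-correlates the incoming signal with the known transmit code, and
# the correlation peak gives the echo delay (range gate). Code length,
# echo delay, amplitude, and noise level are illustrative values.
import numpy as np

rng = np.random.default_rng(2)

n = 4096
code = rng.choice([-1.0, 1.0], n)          # pseudo-random BPSK code

true_delay = 357
rx = 0.1 * np.roll(code, true_delay)       # weak echo (power SNR = -20 dB)
rx = rx + rng.normal(0.0, 1.0, n)          # receiver noise

# circular cross-correlation via FFT; the peak index estimates the delay
xc = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
print(f"estimated delay: {int(np.argmax(xc))} samples (true {true_delay})")
```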

  11. Combustion of dense streams of coal particles

    SciTech Connect

    Annamalai, K.

    1991-01-01

    The main objective of our work is to obtain a specific velocity of the resulting flame and to keep this flame consistent throughout the experiment. To support this, a theoretical study was conducted relating the flow rate of the premixed gas (gas + air), the stoichiometric coal mass flow rate, the interparticle distance of the coal particles, the number of particles, and the maximum coal mass flow rate needed to maintain a specific velocity. Runs were made for velocities of 1.5, 2.0, 2.5, and 3.0 m/s.

  12. Optimization of Heat Exchangers

    SciTech Connect

    Ivan Catton

    2010-10-01

    The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that will allow for multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum, and energy to be solved point by point in a three-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model heat exchanger characteristics (pumping power, temperatures, and cost) more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.
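
    A genetic-algorithm layer of the kind described can be sketched in a few lines once a fast model is available. Below, an invented two-variable objective (a heat-transfer credit minus a pumping-power penalty, constructed so the optimum sits near a pitch and height of 3) stands in for the VAT model; the population size, mutation scale, and variable ranges are all illustrative.

```python
# Toy genetic-algorithm search over two heat-exchanger design variables,
# standing in for the fast VAT model described above. None of the numbers
# come from the project.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    pitch, height = x                       # illustrative design variables
    heat = 1.0 / pitch + 0.5 / height       # finer geometry -> more transfer
    pumping = 3.0 / pitch**3 + 1.5 / height**3  # ...but steep pressure drop
    return heat - pumping                   # quantity to maximize

lo, hi = np.array([1.0, 1.0]), np.array([10.0, 10.0])
pop = rng.uniform(lo, hi, (40, 2))

for gen in range(80):
    fit = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]           # truncation selection
    dads = parents[rng.integers(0, 20, 40)]
    moms = parents[rng.integers(0, 20, 40)]
    mask = rng.random((40, 2)) < 0.5                    # uniform crossover
    children = np.where(mask, dads, moms)
    children += rng.normal(0.0, 0.15, children.shape)   # Gaussian mutation
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([objective(ind) for ind in pop])]
print(f"best design: pitch={best[0]:.2f}, height={best[1]:.2f} (optimum ~3, ~3)")
```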

  13. Stabilized Acoustic Levitation of Dense Materials Using a High-Powered Siren

    NASA Technical Reports Server (NTRS)

    Gammell, P. M.; Croonquist, A.; Wang, T. G.

    1982-01-01

    Stabilized acoustic levitation and manipulation of dense (e.g., steel) objects of 1 cm diameter, using a high powered siren, was demonstrated in trials that investigated the harmonic content and spatial distribution of the acoustic field, as well as the effect of sample position and reflector geometries on the acoustic field. Although further optimization is possible, the most stable operation achieved is expected to be adequate for most containerless processing applications. Best stability was obtained with an open reflector system, using a flat lower reflector and a slightly concave upper one. Operation slightly below resonance enhances stability as this minimizes the second harmonic, which is suspected of being a particularly destabilizing influence.

  14. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  15. Industrial Computer Codes

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow groove theory. The KTK labyrinth seal code handles straight or stepped seals, and DYSEAL provides dynamics for the seal geometry.

  16. Doubled Color Codes

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey

    Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLG) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLG are highly desirable since they introduce no overhead and do not spread errors. It has been known before that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π/2 phase shift. The second code that we call a doubled color code provides a transversal T-gate, where T is the π/4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on a joint work with Andrew Cross.

  17. Aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Chapman, G. T.

    1983-01-01

    The procedure of using numerical optimization methods coupled with computational fluid dynamics (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
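
    The basic loop, in which a numerical optimizer repeatedly calls an aerodynamic analysis to evaluate each candidate design, can be sketched with a cheap analytic stand-in for the flow solver. The drag model and coefficients below are invented for illustration; in ADNO the objective would be evaluated by a CFD code instead.

```python
# Sketch of the design-by-numerical-optimization loop: a general-purpose
# optimizer drives the design variables, calling an "analysis code" for each
# candidate. A one-line analytic drag model stands in for the CFD solver
# here; the coefficients and variable ranges are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def total_drag(x):
    """Toy drag at fixed lift: x = (chord, aspect_ratio)."""
    chord, ar = x
    profile = 0.010 * chord   # skin-friction drag grows with wetted area
    induced = 0.5 / ar        # induced drag falls with aspect ratio
    weight = 0.002 * ar**2    # structural penalty of a slender wing
    return profile + induced + weight

res = minimize(total_drag, x0=np.array([1.0, 5.0]),
               bounds=[(0.5, 3.0), (2.0, 20.0)])
print(f"optimal chord={res.x[0]:.2f}, aspect ratio={res.x[1]:.2f}, "
      f"drag={res.fun:.4f}")
```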

  18. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
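
    The core effect is easy to reproduce numerically: adding a cubic phase term to the pupil function makes the point-spread function far less sensitive to defocus, at the cost of a fixed blur that image processing must later remove. The grid size, defocus values, and cubic strength below are illustrative.

```python
# Sketch of why a cubic phase plate extends depth of focus: compute the PSF
# from a circular pupil with and without a cubic phase term at several
# defocus values. Grid size, defocus range, and the cubic strength are
# illustrative numbers.
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pupil = ((X**2 + Y**2) <= 1.0).astype(float)   # circular aperture

def psf(defocus, alpha):
    """Incoherent PSF for given defocus and cubic-phase strength (in waves)."""
    phase = 2.0 * np.pi * (defocus * (X**2 + Y**2) + alpha * (X**3 + Y**3))
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

print("defocus  conventional  cubic-coded   (peak/total, a crude sharpness)")
for w in (0.0, 1.0, 2.0):
    p0, pc = psf(w, 0.0), psf(w, 5.0)
    # the conventional PSF degrades rapidly with defocus, while the coded
    # PSF stays nearly constant (blurred, but restorable by deconvolution)
    print(f"{w:7.1f}  {p0.max() / p0.sum():12.5f}  {pc.max() / pc.sum():11.5f}")
```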

  19. FAA Smoke Transport Code

    SciTech Connect

    Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos

    2006-10-27

    FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.

  20. Bar Code Labels

    NASA Technical Reports Server (NTRS)

    1988-01-01

    American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed anodizing aluminum process and consecutively marked with bar code symbology and human readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700 degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.

  1. Tokamak Systems Code

    SciTech Connect

    Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.

  2. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
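
    The flavor of such a code can be captured by the textbook Monte Carlo exercise below: one-speed particles stream through a one-dimensional slab, with exponentially sampled path lengths and a scatter-or-absorb decision at each collision. The cross sections and slab thickness are arbitrary illustrative values; real MORSE problems are multigroup and three-dimensional.

```python
# Textbook Monte Carlo transport in the spirit of a general-purpose code:
# one-speed particles in a 1-D slab with absorption and isotropic
# scattering. Cross sections and thickness are illustrative, not MORSE data.
import numpy as np

rng = np.random.default_rng(4)

sigma_t, sigma_s = 1.0, 0.6    # total / scattering cross sections (1/cm)
thickness = 5.0                # slab thickness (cm)
n_hist = 100_000

transmitted = 0
for _ in range(n_hist):
    x, mu = 0.0, 1.0                                 # normally incident
    while True:
        x += mu * (-np.log(rng.random()) / sigma_t)  # sampled free path
        if x < 0.0:                    # leaked out the front face
            break
        if x > thickness:              # leaked out the back: transmitted
            transmitted += 1
            break
        if rng.random() < sigma_s / sigma_t:
            mu = 2.0 * rng.random() - 1.0            # isotropic scatter
        else:
            break                      # absorbed

print(f"transmission probability ~ {transmitted / n_hist:.4f}")
```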

  3. Wheat landraces are better qualified as potential gene pools at ultraspaced rather than densely grown conditions.

    PubMed

    Ninou, Elissavet G; Mylonas, Ioannis G; Tsivelikas, Athanasios; Ralli, Parthenopi; Dordas, Christos; Tokatlidis, Ioannis S

    2014-01-01

    The negative relationship between the yield potential of a genotype and its competitive ability may constitute an obstacle to recognizing outstanding genotypes within heterogeneous populations. This issue was investigated by growing six heterogeneous wheat landraces along with a pure-line commercial cultivar under both dense and widely spaced conditions. The performance of two landraces showed a perfect match to the above relationship. Although they lagged behind the cultivar by 64 and 38% at the dense stand, the reverse was true with spaced plants, where they succeeded in out-yielding the cultivar by 58 and 73%, respectively. It was concluded that a dense stand might undervalue a landrace as a potential gene pool for single-plant selection targeting pure-line cultivars, because plants representing high-yielding genotypes are unable to exhibit their capacity under a competitive disadvantage. On the other hand, the yield expression of individuals is optimized when density is low enough to preclude interplant competition. Therefore, the latter condition appears ideal for identifying the most promising landrace for breeding and subsequently recognizing the individuals representing the most outstanding genotypes. PMID:24955427

  4. Cooperative Game-Based Energy Efficiency Management over Ultra-Dense Wireless Cellular Networks.

    PubMed

    Li, Ming; Chen, Pengpeng; Gao, Shouwan

    2016-09-13

    Ultra-dense wireless cellular networks have been envisioned as a promising technique for handling the explosive increase of wireless traffic volume. With the extensive deployment of small cells in wireless cellular networks, the network spectral efficiency (SE) is improved with the use of limited frequency. However, the mutual inter-tier and intra-tier interference among small cells and macro cells becomes serious. On the other hand, more chances for potential cooperation among different cells are introduced. Energy efficiency (EE) has become one of the most important problems for future wireless networks. This paper proposes a cooperative bargaining game-based method for comprehensive EE management in an ultra-dense wireless cellular network, which highlights the complicated interference influence on energy-saving challenges and the power-coordination process among small cells and macro cells. Notably, a unified EE utility that takes interference mitigation into consideration is proposed to jointly address the SE, the deployment efficiency (DE), and the EE. In particular, closed-form power-coordination solutions for the optimal EE are derived to show the convergence property of the algorithm. Moreover, a simplified algorithm is presented to reduce the complexity of the signaling overhead, which is significant for ultra-dense small cells. Finally, numerical simulations are provided to illustrate the efficiency of the proposed cooperative bargaining game-based and simplified schemes.
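
    The cooperative-bargaining idea can be sketched compactly: the Nash bargaining solution picks the power vector that maximizes the product of each cell's utility gain over its disagreement point, equivalently the sum of log gains. The SINR-based utility, gain matrix, and disagreement value below are toy stand-ins for the unified EE utility of the paper.

```python
# Sketch of a Nash-bargaining flavor of power coordination: maximize the
# product of per-cell utility gains over a disagreement point, i.e. the sum
# of log gains, subject to power limits. The SINR-based utility, the gain
# matrix, and the disagreement value are illustrative, not the paper's
# unified EE utility.
import numpy as np
from scipy.optimize import minimize

gain = np.array([[1.0, 0.1, 0.2],
                 [0.1, 1.0, 0.1],
                 [0.2, 0.1, 1.0]])     # illustrative channel gains
noise, p_max, d = 0.1, 1.0, 0.05       # noise power, power cap, disagreement

def neg_log_nash_product(p):
    interf = gain @ p - np.diag(gain) * p     # interference received per cell
    sinr = np.diag(gain) * p / (noise + interf)
    u = np.log2(1.0 + sinr) / (1.0 + p)       # toy bits-per-power utility
    if np.any(u <= d):
        return 1e9                            # outside the bargaining set
    return -np.sum(np.log(u - d))

res = minimize(neg_log_nash_product, x0=np.full(3, 0.5),
               bounds=[(1e-3, p_max)] * 3)
print("coordinated powers:", np.round(res.x, 3))
```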

  5. Wheat Landraces Are Better Qualified as Potential Gene Pools at Ultraspaced rather than Densely Grown Conditions

    PubMed Central

    Ninou, Elissavet G.; Mylonas, Ioannis G.; Tokatlidis, Ioannis S.

    2014-01-01

    The negative relationship between the yield potential of a genotype and its competitive ability may constitute an obstacle to recognizing outstanding genotypes within heterogeneous populations. This issue was investigated by growing six heterogeneous wheat landraces along with a pure-line commercial cultivar under both dense and widely spaced conditions. The performance of two landraces showed a perfect match to the above relationship. Although they lagged behind the cultivar by 64 and 38% at the dense stand, the reverse was true with spaced plants, where they succeeded in out-yielding the cultivar by 58 and 73%, respectively. It was concluded that a dense stand might undervalue a landrace as a potential gene pool for single-plant selection targeting pure-line cultivars, because plants representing high-yielding genotypes are unable to exhibit their capacity under a competitive disadvantage. On the other hand, the yield expression of individuals is optimized when density is low enough to preclude interplant competition. Therefore, the latter condition appears ideal for identifying the most promising landrace for breeding and subsequently recognizing the individuals representing the most outstanding genotypes. PMID:24955427

  7. An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks

    PubMed Central

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun

    2015-01-01

    Super dense and distributed wireless sensor networks have become very popular with the development of small cell technology, Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicular-to-Vehicular (V2V) communications and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions to improve the accuracy of sensing and spectral efficiency, a new channel access scheme needs to be designed to solve the channel congestion problem introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyzed the channel contention problem using a novel normalized channel contention analysis model which provides information on how to tune the contention window according to the state of channel contention. We then proposed an adaptive channel contention window tuning algorithm in which the contention window tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our proposed adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and a fairness index of 0.97, especially in dynamic and dense networks. PMID:26633421

  8. An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks.

    PubMed

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun

    2015-12-03

    Super dense and distributed wireless sensor networks have become very popular with the development of small cell technology, Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicular-to-Vehicular (V2V) communications and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions to improve the accuracy of sensing and spectral efficiency, a new channel access scheme needs to be designed to solve the channel congestion problem introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyzed the channel contention problem using a novel normalized channel contention analysis model which provides information on how to tune the contention window according to the state of channel contention. We then proposed an adaptive channel contention window tuning algorithm in which the contention window tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our proposed adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and a fairness index of 0.97, especially in dynamic and dense networks.
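
    The adaptive mechanism reduces to a feedback loop: estimate the contention level from collision feedback, then scale the contention window at a rate that grows with the distance from a target. The sketch below uses an invented slotted-collision model and tuning constants; it illustrates fast re-adaptation when the network suddenly densifies, not the paper's exact algorithm.

```python
# Sketch of adaptive contention-window (CW) tuning: an EWMA estimate of the
# collision rate drives the CW update, and the update rate depends on how
# far the estimate is from a target. All constants and the collision model
# are illustrative stand-ins, not the algorithm from the paper.
import random

random.seed(5)

cw, cw_min, cw_max = 16, 16, 1024
target = 0.1                    # desired collision probability
est = 0.0                       # smoothed collision-rate estimate

def observe_collision(n_nodes, cw):
    """Crude slotted model: collision if >= 2 nodes pick the same slot."""
    slots = [random.randrange(cw) for _ in range(n_nodes)]
    return 1.0 if len(set(slots)) < n_nodes else 0.0

for t in range(2000):
    n_nodes = 8 if t < 1000 else 64        # sudden densification at t = 1000
    est = 0.95 * est + 0.05 * observe_collision(n_nodes, cw)
    # tuning rate grows with the error, so adaptation is fast when the
    # network suddenly becomes dense and gentle near the target
    error = est - target
    cw = int(min(cw_max, max(cw_min, cw * (1.0 + 0.5 * error))))
    if t in (999, 1999):
        print(f"t={t:4d} nodes={n_nodes:2d} est={est:.2f} CW={cw}")
```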

  9. Cooperative Game-Based Energy Efficiency Management over Ultra-Dense Wireless Cellular Networks

    PubMed Central

    Li, Ming; Chen, Pengpeng; Gao, Shouwan

    2016-01-01

    Ultra-dense wireless cellular networks have been envisioned as a promising technique for handling the explosive increase of wireless traffic volume. With the extensive deployment of small cells in wireless cellular networks, the network spectral efficiency (SE) is improved with the use of limited frequency. However, the mutual inter-tier and intra-tier interference among small cells and macro cells becomes serious. On the other hand, more chances for potential cooperation among different cells are introduced. Energy efficiency (EE) has become one of the most important problems for future wireless networks. This paper proposes a cooperative bargaining game-based method for comprehensive EE management in an ultra-dense wireless cellular network, which highlights the complicated interference influence on energy-saving challenges and the power-coordination process among small cells and macro cells. Notably, a unified EE utility that takes interference mitigation into consideration is proposed to jointly address the SE, the deployment efficiency (DE), and the EE. In particular, closed-form power-coordination solutions for the optimal EE are derived to show the convergence property of the algorithm. Moreover, a simplified algorithm is presented to reduce the complexity of the signaling overhead, which is significant for ultra-dense small cells. Finally, numerical simulations are provided to illustrate the efficiency of the proposed cooperative bargaining game-based and simplified schemes. PMID:27649170

  11. Reversibility and efficiency in coding protein information.

    PubMed

    Tamir, Boaz; Priel, Avner

    2010-12-21

    Why does the genetic code have a fixed length? Protein information is transferred by coding each amino acid using codons whose length equals 3 for all amino acids. Hence the most probable and the least probable amino acids get codewords of equal length. Moreover, the distributions of amino acids found in nature are not uniform, and therefore the efficiency of such codes is sub-optimal. The origins of these apparently non-efficient codes are yet unclear. In this paper we propose an a priori argument for the energy efficiency of such codes resulting from their reversibility, in contrast to their time inefficiency. Such codes are reversible in the sense that a primitive processor, reading three letters in each step, can always reverse its operation, undoing its process. We examine the codes for the distributions of amino acids that exist in nature and show that they could not be both time efficient and reversible. We investigate a family of Zipf-type distributions and present their efficient (non-fixed-length) prefix code, their graphs, and the condition for their reversibility. We prove that for a large family of such distributions, if the code is time efficient, it could not be reversible. In other words, if pre-biotic processes demand reversibility, the protein code could not be time efficient. The benefits of reversibility are clear: reversible processes are adiabatic, namely, they dissipate a very small amount of energy. Such processes must be done slowly enough; therefore time efficiency is unimportant. It is reasonable to assume that early biochemical complexes were more prone towards energy efficiency, where forward and backward processes were almost symmetrical. PMID:20868696
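
    The time-inefficiency claim is easy to quantify: for a Zipf-type distribution over the 20 amino acids, an optimal (Huffman) prefix code needs far fewer symbols on average than the fixed three-letter codons. The sketch below builds a binary Huffman code; the Zipf exponent of 1 and the binary alphabet are illustrative simplifications, and dividing bit lengths by log2(4) = 2 converts to four-letter nucleotide symbols.

```python
# Compare the fixed 3-letter (6-bit) codon code with an optimal Huffman
# prefix code for a Zipf-like amino-acid distribution.
import heapq
import math

z = sum(1.0 / r for r in range(1, 21))
p = [1.0 / (r * z) for r in range(1, 21)]      # Zipf-like probabilities

# Huffman: repeatedly merge the two least probable groups; each merge adds
# one bit to the codeword length of every leaf in the merged groups
heap = [(pi, i) for i, pi in enumerate(p)]
heapq.heapify(heap)
leaves = {i: [i] for i in range(20)}
lengths = [0] * 20
next_id = 20
while len(heap) > 1:
    pa, a = heapq.heappop(heap)
    pb, b = heapq.heappop(heap)
    merged = leaves.pop(a) + leaves.pop(b)
    for leaf in merged:
        lengths[leaf] += 1
    leaves[next_id] = merged
    heapq.heappush(heap, (pa + pb, next_id))
    next_id += 1

H = -sum(pi * math.log2(pi) for pi in p)
L = sum(pi * li for pi, li in zip(p, lengths))
print(f"source entropy     : {H:.3f} bits")
print(f"Huffman mean length: {L:.3f} bits = {L / 2:.3f} base-4 letters")
print(f"fixed genetic code : 6.000 bits = 3.000 base-4 letters")
```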

  12. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.

  13. A finite element code for electric motor design

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  14. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.

  15. Research on universal combinatorial coding.

    PubMed

    Lu, Jun; Zhang, Zhuo; Mo, Juan

    2014-01-01

    The concept of universal combinatorial coding is proposed. Many coding methods are related to one another to varying degrees, which suggests that a universal coding method objectively exists and can serve as a bridge connecting them. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinatorial and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probabilistic characteristics of the information source, and it has characteristics that span the three branches of coding. We analyze the relationship between universal combinatorial coding and a variety of coding methods and investigate several application technologies of this coding method. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. This combination of characteristics and applications is unique among existing coding methods. Universal combinatorial coding has both theoretical research and practical application value. PMID:24772019

  17. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low-complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
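
    The shape of such a mechanism can be sketched as follows: keep per-depth split statistics, skip the expensive rate-distortion comparison when the historical split probability is low, and periodically decay the counters so the model can re-adapt after content change. The threshold, the update rule, and the rd_cost() placeholder below are illustrative, not the probability model of the paper.

```python
# Sketch of a probability-driven fast CU decision with a periodic
# probability update. All constants and rd_cost() are illustrative.
import random

random.seed(6)

split_count = {d: 1 for d in range(4)}   # Laplace-smoothed counters
total_count = {d: 2 for d in range(4)}
THRESHOLD = 0.15

def rd_cost(depth, split):
    """Placeholder for the encoder's expensive RD evaluation."""
    return random.random() + (0.1 * depth if split else 0.0)

def encode_cu(depth):
    p_split = split_count[depth] / total_count[depth]
    if depth == 3 or p_split < THRESHOLD:
        split = False                    # early termination: RD check skipped
    else:
        split = rd_cost(depth, True) < rd_cost(depth, False)
    total_count[depth] += 1
    split_count[depth] += int(split)
    if split:
        for _ in range(4):               # recurse into the four sub-CUs
            encode_cu(depth + 1)

for t in range(5000):
    if t % 1000 == 0:                    # periodic decay: lets the model
        for d in range(4):               # track content change
            split_count[d] = max(1, split_count[d] // 2)
            total_count[d] = max(2, total_count[d] // 2)
    encode_cu(0)

print({d: round(split_count[d] / total_count[d], 2) for d in range(4)})
```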

  18. Fabrication, Properties and Applications of Dense Hydroxyapatite: A Review.

    PubMed

    Prakasam, Mythili; Locs, Janis; Salma-Ancane, Kristine; Loca, Dagnija; Largeteau, Alain; Berzina-Cimdina, Liga

    2015-12-21

    In the last five decades, there have been vast advances in the field of biomaterials, including ceramics, glasses, glass-ceramics and metal alloys. Dense and porous ceramics have been widely used for various biomedical applications. Current applications of bioceramics include bone grafts, spinal fusion, bone repairs, bone fillers, maxillofacial reconstruction, etc. Amongst the various calcium phosphate compositions, hydroxyapatite, which has a composition similar to human bone, has attracted wide interest. Much emphasis is given to tissue engineering, both in porous and dense ceramic forms. The current review focuses on the various applications of dense hydroxyapatite and other dense biomaterials, with emphasis on transparency and on mechanical and electrical behavior. Prospective future applications, building on the aforesaid applications of hydroxyapatite, appear to be promising regarding bone bonding, advanced medical treatment methods, improvement of the mechanical strength of artificial bone grafts and better in vitro/in vivo methodologies to afford more particular outcomes.

  19. Stopping Power in Dense Plasmas: Models, Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Grabowski, Paul; Fichtl, Chris; Graziani, Frank; Hazi, Andrew; Murillo, Michael; Sheperd, Ronnie; Surh, Mike; Cimarron Collaboration

    2011-10-01

    Our goal is to conclusively determine the minimal model for stopping power in dense plasmas via a three-pronged theoretical, simulation, and experimental program. Stopping power in dense plasma is important for ion beam heating of targets (e.g., fast ignition) and alpha particle energy deposition in inertial confinement fusion targets. We wish to minimize our uncertainties in the stopping power by comparing a wide range of theoretical approaches to both detailed molecular dynamics (MD) simulations and experiments. The largest uncertainties occur for slow-to-moderate velocity projectiles, dense plasmas, and highly charged projectiles. We have performed MD simulations of a classical, one component plasma to reveal where there are weaknesses in our kinetic theories of stopping power, over a wide range of plasma conditions. We have also performed stopping experiments of protons in heated warm dense carbon for validation of such models, including MD calculations, of realistic plasmas for which bound contributions are important.

  20. Ion-ion dynamic structure factor of warm dense mixtures

    DOE PAGESBeta

    Gill, N. M.; Heinonen, R. A.; Starrett, C. E.; Saumon, D.

    2015-06-25

    In this study, the ion-ion dynamic structure factor of warm dense matter is determined using the recently developed pseudoatom molecular dynamics method [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. The method uses density functional theory to determine ion-ion pair interaction potentials that have no free parameters. These potentials are used in classical molecular dynamics simulations. This constitutes a computationally efficient and realistic model of dense plasmas. Comparison with recently published simulations of the ion-ion dynamic structure factor and sound speed of warm dense aluminum finds good to reasonable agreement. Using this method, we make predictions of the ion-ion dynamical structure factor and sound speed of a warm dense mixture—equimolar carbon-hydrogen. This material is commonly used as an ablator in inertial confinement fusion capsules, and our results are amenable to direct experimental measurement.

  1. Ion-ion dynamic structure factor of warm dense mixtures

    SciTech Connect

    Gill, N. M.; Heinonen, R. A.; Starrett, C. E.; Saumon, D.

    2015-06-25

    In this study, the ion-ion dynamic structure factor of warm dense matter is determined using the recently developed pseudoatom molecular dynamics method [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. The method uses density functional theory to determine ion-ion pair interaction potentials that have no free parameters. These potentials are used in classical molecular dynamics simulations. This constitutes a computationally efficient and realistic model of dense plasmas. Comparison with recently published simulations of the ion-ion dynamic structure factor and sound speed of warm dense aluminum finds good to reasonable agreement. Using this method, we make predictions of the ion-ion dynamical structure factor and sound speed of a warm dense mixture—equimolar carbon-hydrogen. This material is commonly used as an ablator in inertial confinement fusion capsules, and our results are amenable to direct experimental measurement.
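
    In practice the ion-ion dynamic structure factor is extracted from a molecular dynamics trajectory by Fourier-transforming the collective density fluctuation in time. The sketch below does exactly that for a synthetic random-walk "trajectory" standing in for real MD output; box size, time step, and particle counts are illustrative, and a production analysis would also average over wavevector directions and apply windowing.

```python
# Sketch of extracting S(k, w) from a molecular-dynamics trajectory:
# Fourier-transform the collective density fluctuation rho_k(t) in time and
# take its power spectrum. The random-walk "trajectory" below is synthetic
# filler, not real MD positions.
import numpy as np

rng = np.random.default_rng(7)

n_ions, n_steps, box, dt = 256, 2048, 10.0, 0.01
traj = (np.cumsum(rng.normal(0.0, 0.02, (n_steps, n_ions, 3)), axis=0)
        + rng.uniform(0.0, box, (1, n_ions, 3)))    # fake diffusing ions

k = 2.0 * np.pi / box * np.array([1.0, 0.0, 0.0])   # smallest k in the box
rho_k = np.exp(-1j * (traj @ k)).sum(axis=1)        # collective density mode
rho_k -= rho_k.mean()                               # drop the elastic part

spec = np.abs(np.fft.fft(rho_k))**2 / (n_ions * n_steps)
omega = 2.0 * np.pi * np.fft.fftfreq(n_steps, dt)
for w, s in zip(omega[:5], spec[:5]):
    print(f"omega = {w:8.2f}   S(k, omega) ~ {s:.4f}")
```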

  2. Fabrication, Properties and Applications of Dense Hydroxyapatite: A Review

    PubMed Central

    Prakasam, Mythili; Locs, Janis; Salma-Ancane, Kristine; Loca, Dagnija; Largeteau, Alain; Berzina-Cimdina, Liga

    2015-01-01

    In the last five decades, there have been vast advances in the field of biomaterials, including ceramics, glasses, glass-ceramics and metal alloys. Dense and porous ceramics have been widely used for various biomedical applications. Current applications of bioceramics include bone grafts, spinal fusion, bone repairs, bone fillers, maxillofacial reconstruction, etc. Amongst the various calcium phosphate compositions, hydroxyapatite, which has a composition similar to human bone, has attracted wide interest. Much emphasis is given to tissue engineering, both in porous and dense ceramic forms. The current review focuses on the various applications of dense hydroxyapatite and other dense biomaterials, with emphasis on transparency and on mechanical and electrical behavior. Prospective future applications, building on the aforesaid applications of hydroxyapatite, appear to be promising regarding bone bonding, advanced medical treatment methods, improvement of the mechanical strength of artificial bone grafts and better in vitro/in vivo methodologies to afford more particular outcomes. PMID:26703750

  3. Code of Ethics

    ERIC Educational Resources Information Center

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  4. Lichenase and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2000-08-15

    The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-.beta.-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.

  5. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  6. Electron-ion temperature equilibration in warm dense tantalum

    SciTech Connect

    Doppner, T; LePape, S.; Ma, T.; Pak, A.; Hartley, N. J.; Peters, L.; Gregori, G.; Belancourt, P.; Drake, R. P.; Chapman, D. A.; Richardson, S.; Gericke, D. O.; Glenzer, S. H.; Khaghani, D.; Neumayer, P.; Vorberger, J.; White, T. G.

    2014-11-05

    We present measurements of electron-ion temperature equilibration in proton-heated tantalum under warm dense matter conditions. Our results agree with theoretical predictions for metals calculated using input data from ab initio simulations. Furthermore, the fast relaxation observed in the experiment contrasts with the much longer equilibration times found in proton-heated carbon, indicating that the energy flow pathways in warm dense matter are far from being fully understood.

  7. Measurement of electron-ion relaxation in warm dense copper

    DOE PAGESBeta

    Cho, B. I.; Ogitsu, T.; Engelhorn, K.; Correa, A. A.; Ping, Y.; Lee, J. W.; Bae, L. J.; Prendergast, D.; Falcone, R. W.; Heimann, P. A.

    2016-01-06

    An experimental investigation of the electron-ion coupling and electron heat capacity of copper in warm and dense states is presented. From time-resolved x-ray absorption spectroscopy, the temporal evolution of the electron temperature is obtained for non-equilibrium warm dense copper heated by an intense femtosecond laser pulse. The electron heat capacity and electron-ion coupling are inferred from the initial electron temperature and its decrease over 10 ps. Finally, the data are compared with various theoretical models.
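
    The inference step described here is usually pictured with the standard two-temperature model, in which hot electrons cool into the cold lattice at a rate set by the coupling constant and the heat capacities. The sketch below integrates that model; all parameter values are illustrative placeholders, not the measured copper values.

      # Two-temperature-model sketch of electron-ion relaxation:
      #   dTe/dt = -g (Te - Ti) / Ce,   dTi/dt = +g (Te - Ti) / Ci
      # Parameter values are placeholders, not the measured copper values.

      g  = 2.0e17   # electron-ion coupling, W m^-3 K^-1 (placeholder)
      Ce = 1.0e5    # electron heat capacity, J m^-3 K^-1 (placeholder; Te-dependent in reality)
      Ci = 3.0e6    # lattice heat capacity, J m^-3 K^-1 (placeholder)

      Te, Ti = 10000.0, 300.0      # hot electrons, cold lattice after fs heating
      dt, steps = 1.0e-15, 10000   # 1 fs steps over 10 ps

      for _ in range(steps):
          dT = Te - Ti
          Te -= g * dT / Ce * dt   # electrons cool into the lattice
          Ti += g * dT / Ci * dt   # lattice heats up

      print(f"After 10 ps: Te = {Te:.0f} K, Ti = {Ti:.0f} K")

    Fitting a measured Te(t) decay against curves like this one is what allows g and Ce to be inferred from the cooling observed over 10 ps.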

  8. Mapping CS in starburst galaxies: Disentangling and characterising dense gas

    NASA Astrophysics Data System (ADS)

    Kelly, G.; Viti, S.; Bayet, E.; Aladro, R.; Yates, J.

    2015-06-01

    Aims: We observe the dense gas tracer CS in two nearby starburst galaxies to determine how the conditions of the dense gas vary across the circumnuclear regions of starburst galaxies. Methods: Using the IRAM-30m telescope, we mapped the distribution of the CS(2-1) and CS(3-2) lines in the circumnuclear regions of the nearby starburst galaxies NGC 3079 and NGC 6946. We also detected formaldehyde (H2CO) and methanol (CH3OH) in both galaxies, and marginally detected the isotopologue C34S. Results: We calculate column densities under LTE conditions for CS and CH3OH. Using the detections accumulated here to guide our inputs, we link a time- and depth-dependent chemical model with a molecular line radiative transfer model; we reproduce the observations, showing how the conditions where CS is present are likely to vary away from the galactic centres. Conclusions: Using the rotational diagram method for CH3OH, we obtain a lower-limit temperature of 14 K. In addition, by comparing the chemical and radiative transfer models to observations, we determine the properties of the dense gas as traced by CS (and CH3OH), and we estimate the quantity of the dense gas. We find that, provided there are between 10^5 and 10^6 dense cores in our beam, for both target galaxies, emission of CS from warm (T = 100-400 K), dense (n(H2) = 10^5-10^6 cm^-3) cores, possibly with a high cosmic ray ionisation rate (ζ = 100 ζ_0), best describes the conditions for our central pointing. In NGC 6946, conditions are generally cooler and/or less dense further from the centre, whereas in NGC 3079, conditions are more uniform. The inclusion of shocks allows for more efficient CS formation, which means that gas that is less dense by an order of magnitude is required to replicate the observations in some cases.
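
    The rotational diagram method cited in the conclusions reduces to a straight-line fit: under LTE, ln(N_u/g_u) = ln(N/Q) - E_u/(k T_rot), so the slope of ln(N_u/g_u) against E_u/k gives -1/T_rot. The sketch below illustrates the fit with made-up line data chosen to land near the 14 K quoted above; the values are not the paper's measurements.

      # Rotational diagram sketch: fit ln(N_u/g_u) vs E_u/k; slope = -1/T_rot.
      # The line data are invented placeholders, not the paper's CH3OH values.

      import numpy as np

      E_u_over_k = np.array([7.0, 20.0, 35.0, 50.0])            # upper-level energies, K
      N_u_over_g = np.array([2.0e13, 8.0e12, 2.7e12, 9.0e11])   # column density / degeneracy, cm^-2

      slope, intercept = np.polyfit(E_u_over_k, np.log(N_u_over_g), 1)
      T_rot = -1.0 / slope
      print(f"Rotational temperature: {T_rot:.1f} K")           # ~14 K for these inputs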

  9. Measurement of Electron-Ion Relaxation in Warm Dense Copper

    NASA Astrophysics Data System (ADS)

    Cho, B. I.; Ogitsu, T.; Engelhorn, K.; Correa, A. A.; Ping, Y.; Lee, J. W.; Bae, L. J.; Prendergast, D.; Falcone, R. W.; Heimann, P. A.

    2016-01-01

    An experimental investigation of the electron-ion coupling and electron heat capacity of copper in warm and dense states is presented. From time-resolved x-ray absorption spectroscopy, the temporal evolution of the electron temperature is obtained for non-equilibrium warm dense copper heated by an intense femtosecond laser pulse. The electron heat capacity and electron-ion coupling are inferred from the initial electron temperature and its decrease over 10 ps. The data are compared with various theoretical models.

  10. Electron-ion temperature equilibration in warm dense tantalum

    DOE PAGESBeta

    Doppner, T; LePape, S.; Ma, T.; Pak, A.; Hartley, N. J.; Peters, L.; Gregori, G.; Belancourt, P.; Drake, R. P.; Chapman, D. A.; et al

    2014-11-05

    We present measurements of electron-ion temperature equilibration in proton-heated tantalum under warm dense matter conditions. Our results agree with theoretical predictions for metals calculated using input data from ab initio simulations. Furthermore, the fast relaxation observed in the experiment contrasts with the much longer equilibration times found in proton-heated carbon, indicating that the energy flow pathways in warm dense matter are far from being fully understood.

  11. Measurement of Electron-Ion Relaxation in Warm Dense Copper

    PubMed Central

    Cho, B. I.; Ogitsu, T.; Engelhorn, K.; Correa, A. A.; Ping, Y.; Lee, J. W.; Bae, L. J.; Prendergast, D.; Falcone, R. W.; Heimann, P. A.

    2016-01-01

    An experimental investigation of the electron-ion coupling and electron heat capacity of copper in warm and dense states is presented. From time-resolved x-ray absorption spectroscopy, the temporal evolution of the electron temperature is obtained for non-equilibrium warm dense copper heated by an intense femtosecond laser pulse. The electron heat capacity and electron-ion coupling are inferred from the initial electron temperature and its decrease over 10 ps. The data are compared with various theoretical models. PMID:26733236

  12. Simulations of the interaction of intense petawatt laser pulses with dense Z-pinch plasmas : final report LDRD 39670.

    SciTech Connect

    Welch, Dale Robert; MacFarlane, Joseph John; Mehlhorn, Thomas Alan; Campbell, Robert B.

    2004-11-01

    We have studied the feasibility of using the 3D fully electromagnetic implicit hybrid particle code LSP (Large Scale Plasma) to study laser-plasma interactions with dense, compressed plasmas like those created with Z, and those that might be created with the planned ZR. We have determined that, with the additional physics and numerical algorithms developed during the LDRD period, LSP was transformed into a unique platform for studying such interactions. Its uniqueness stems from its ability to handle realistic compressed densities and low initial target temperatures (if required), an ability that conventional PIC codes do not possess. Through several test cases, validations, and applications to next-generation machines described in this report, we have established the suitability of the code for examining fast ignition issues for ZR, as well as other high-density laser-plasma interaction problems relevant to the HEDP program at Sandia (e.g., backlighting).

  13. Combustion chamber analysis code

    NASA Astrophysics Data System (ADS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-05-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
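
    As one concrete piece of the numerics named above, the sketch below applies a MUSCL-type limited reconstruction to the simplest possible finite-volume problem, 1D linear advection with periodic boundaries. It is a generic textbook illustration of the scheme, not code from the rocket-engine solver itself.

      # Generic MUSCL finite-volume sketch for 1D linear advection u_t + a u_x = 0.
      # Textbook illustration only; forward-Euler time stepping keeps it short.

      import numpy as np

      def minmod(p, q):
          """Slope limiter: second-order reconstruction without new oscillations."""
          return np.where(p * q > 0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

      nx, a, cfl = 200, 1.0, 0.5
      dx = 1.0 / nx
      dt = cfl * dx / a
      x = (np.arange(nx) + 0.5) * dx
      u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)     # square-pulse initial condition

      for _ in range(int(0.2 / dt)):
          # Limited slope in each cell from neighbouring differences (periodic).
          du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
          # Left state at each cell's right face; for a > 0 this is the upwind state.
          flux = a * (u + 0.5 * du)
          u = u - dt / dx * (flux - np.roll(flux, 1))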

  14. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  15. Energy Conservation Code Decoded

    SciTech Connect

    Cole, Pam C.; Taylor, Zachary T.

    2006-09-01

    Designing an energy-efficient, affordable, and comfortable home is a lot easier thanks to a slim, easier-to-read booklet, the 2006 International Energy Conservation Code (IECC), published in March 2006. States, counties, and cities have begun reviewing the new code as a potential upgrade to their existing codes. Maintained under the public consensus process of the International Code Council, the IECC is designed to do just what its title says: promote the design and construction of energy-efficient homes and commercial buildings. "Homes" in this case means traditional single-family homes, duplexes, condominiums, and apartment buildings having three or fewer stories. The U.S. Department of Energy, which played a key role in proposing the changes that resulted in the new code, is offering a free training course that covers the residential provisions of the 2006 IECC.

  16. Evolving genetic code

    PubMed Central

    OHAMA, Takeshi; INAGAKI, Yuji; BESSHO, Yoshitaka; OSAWA, Syozo

    2008-01-01

    In 1985, we reported that a bacterium, Mycoplasma capricolum, used a deviant genetic code, in which UGA, a "universal" stop codon, was read as tryptophan. This finding, together with the deviant nuclear genetic codes found in a number of organisms and in many mitochondria, shows that the genetic code is not universal and is in a state of evolution. To account for the changes in codon meanings, we proposed the codon capture theory, which states that all code changes are non-disruptive, occurring without accompanying changes to the amino acid sequences of proteins. Supporting evidence for the theory is presented in this review. A possible evolutionary process from the ancient to the present-day genetic code is also discussed. PMID:18941287
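
    The reassignment described above amounts to a one-entry change in the organism's translation table. The toy sketch below makes that concrete; the codon table is truncated to the few codons the example sequence needs and is purely illustrative.

      # Toy sketch of the UGA reassignment: in Mycoplasma capricolum the
      # "universal" stop codon UGA is read as tryptophan (W). The codon table
      # is truncated to what this example needs.

      STANDARD = {"AUG": "M", "UUU": "F", "UGG": "W", "UGA": "*"}   # '*' = stop
      MYCOPLASMA = {**STANDARD, "UGA": "W"}                         # one codon reassigned

      def translate(mrna, table):
          protein = []
          for i in range(0, len(mrna) - 2, 3):
              aa = table[mrna[i:i + 3]]
              if aa == "*":
                  break                        # stop codon terminates translation
              protein.append(aa)
          return "".join(protein)

      mrna = "AUGUUUUGAUUU"
      print(translate(mrna, STANDARD))         # 'MF'   -- UGA read as stop
      print(translate(mrna, MYCOPLASMA))       # 'MFWF' -- UGA read as tryptophan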

  17. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first and second families can be regarded as generalizations of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second and third families attain the quantum generalized Singleton bound.

  18. Parallelized event chain algorithm for dense hard sphere and polymer systems

    SciTech Connect

    Kampmann, Tobias A. Boltz, Horst-Holger; Kierfeld, Jan

    2015-01-15

    We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach based on simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.
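
    For readers unfamiliar with the move being parallelized, the sketch below is a minimal serial event chain for hard disks in a periodic box: a disk is displaced along +x until it touches a neighbour, which then inherits the remaining displacement budget. The naive O(N) collision search and the omission of cell lists are simplifications; starting disks are drawn with replacement, as the paper requires.

      # Minimal serial event-chain Monte Carlo for hard disks (periodic box).
      # Simplified: +x chains only, naive O(N) collision search, no cell lists.

      import numpy as np

      rng = np.random.default_rng(1)
      N, box, sigma = 32, 10.0, 1.0      # disk count, box side, disk diameter

      side = int(np.ceil(np.sqrt(N)))    # dilute lattice start: no overlaps
      pos = np.array([((i % side) + 0.5, (i // side) + 0.5) for i in range(N)],
                     dtype=float) * box / side

      def event_chain(pos, chain_length):
          """One chain: move disks in +x, passing the budget on each collision."""
          k = rng.integers(N)                      # starting disk, drawn with replacement
          budget = chain_length
          while budget > 1e-12:
              dx = (pos[:, 0] - pos[k, 0]) % box                   # forward separation
              dy = (pos[:, 1] - pos[k, 1] + box / 2) % box - box / 2
              hit, s_min = -1, budget
              for j in range(N):
                  if j == k or abs(dy[j]) >= sigma:
                      continue                                     # cannot collide
                  s = dx[j] - np.sqrt(sigma**2 - dy[j]**2)         # distance to contact
                  if 1e-12 < s < s_min:
                      hit, s_min = j, s
              pos[k, 0] = (pos[k, 0] + s_min) % box                # advance to event
              budget -= s_min
              if hit < 0:
                  break                                            # budget spent, chain ends
              k = hit                                              # partner continues the chain

      for _ in range(100):
          event_chain(pos, chain_length=3.0)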

  19. Optical characterization of nonimaging dish concentrator for the application of dense-array concentrator photovoltaic system.

    PubMed

    Tan, Ming-Hui; Chong, Kok-Keong; Wong, Chee-Woon

    2014-01-20

    The optimization of the design of a nonimaging dish concentrator (NIDC) for a dense-array concentrator photovoltaic system is presented. A new algorithm has been developed to determine the configuration of facet mirrors in a NIDC. Analytical formulas were derived to analyze the optical performance of a NIDC and were then compared with simulated results obtained from a numerical method. A comprehensive analysis of optical performance via the analytical method has been carried out over the facet dimension and focal distance of a concentrator with a total reflective area of 120 m2. The results show that a facet dimension of 49.8 cm and a focal distance of 8 m, giving a solar concentration ratio of 411.8 suns, form the most optimized design with the lowest cost-per-output-power of US$1.93 per watt.
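
    Structurally, the optimization described is a search over facet dimension and focal distance for the design minimizing cost per output watt. The sketch below mimics that search with a plain grid sweep; the cost and power models are invented placeholders, not the paper's derived optical formulas, so only the shape of the procedure carries over.

      # Hypothetical grid search for minimum cost-per-watt over facet size and
      # focal distance. Cost and power models are invented placeholders.

      import itertools

      def output_power_w(facet_cm, focal_m):
          """Placeholder optics: power falls off away from a notional optimum."""
          return 60000.0 - 60.0 * (facet_cm - 50.0) ** 2 - 800.0 * (focal_m - 8.0) ** 2

      def system_cost_usd(facet_cm, focal_m):
          """Placeholder costs: smaller facets mean more mirrors to buy and align."""
          n_facets = 1.2e6 / facet_cm**2            # fixed total reflective area
          return 50.0 * n_facets + 4000.0 * focal_m

      best = min(
          itertools.product(range(30, 71, 2), range(4, 13)),  # (facet cm, focal m) grid
          key=lambda d: system_cost_usd(*d) / output_power_w(*d),
      )
      facet, focal = best
      print(f"facet {facet} cm, focal {focal} m: "
            f"{system_cost_usd(facet, focal) / output_power_w(facet, focal):.2f} $/W")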

  20. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.