Science.gov

Sample records for optimal dense coding

  1. Optimal probabilistic dense coding schemes

    NASA Astrophysics Data System (ADS)

    Kögler, Roger A.; Neves, Leonardo

    2017-04-01

    Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) the message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize on these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only to qudits (d-level quantum systems with d > 2) and consists of further attempts in the state-identification process in case of failure in the first one. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.
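
    As a point of reference for the two decoding strategies (i) and (ii) contrasted above, the following sketch (not the authors' construction) computes the textbook success probabilities for discriminating two equiprobable non-orthogonal message states: the Helstrom minimum-error bound and the optimal unambiguous-discrimination rate.

    ```python
    # Hedged sketch: the two limiting decoding strategies the abstract contrasts,
    # for a pair of equiprobable non-orthogonal message states with overlap
    # s = |<psi0|psi1>|.
    import numpy as np

    def helstrom_success(s):
        """Minimum-error discrimination: always guess, smallest average error."""
        return 0.5 * (1.0 + np.sqrt(1.0 - s**2))

    def unambiguous_success(s):
        """Unambiguous (IDP) discrimination: never wrong, but may fail."""
        return 1.0 - s

    # A non-maximally entangled resource typically yields overlap s > 0.
    for s in (0.0, 0.2, 0.5):
        print(f"overlap {s:.1f}: min-error P = {helstrom_success(s):.3f}, "
              f"unambiguous P = {unambiguous_success(s):.3f}")
    ```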

  2. Optimal dense coding with arbitrary pure entangled states

    SciTech Connect

    Feng, Yuan; Duan, Runyao; Ji, Zhengfeng

    2006-07-15

    We examine dense coding with an arbitrary pure entangled state shared between the sender and the receiver. Upper bounds on the average success probability in approximate dense coding and on the probability of conclusive results in unambiguous dense coding are derived. We also construct the optimal protocol which saturates the upper bound in each case.

  3. Optimized QKD BB84 protocol using quantum dense coding and CNOT gates: feasibility based on probabilistic optical devices

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2014-05-01

    In this work, we simulate a fiber-based Quantum Key Distribution Protocol (QKDP) BB84 working at the telecom wavelength 1550 nm, taking into consideration an optimized attack strategy. We consider a quantum channel composed of a probabilistic Single Photon Source (SPS), single-mode optical fiber, and a high-efficiency quantum detector. We show the advantages of using Quantum Dots (QD) embedded in a micro-cavity compared to Heralded Single Photon Sources (HSPS). Second, we show that Eve always obtains some information depending on the mean photon number per pulse of the SPS used, and we therefore propose an optimized version of the QKDP BB84 based on Quantum Dense Coding (QDC) that could be implemented with quantum CNOT gates. We evaluate the success probability of implementing the optimized QKDP BB84 when using today's probabilistic quantum optical devices for circuit realization. For our modeling we use an abstract probabilistic model of a CNOT gate based on linear optical components and having a success probability of sqrt(4/27), and we take into consideration the best SPS realizations, namely the QD and the HSPS, generating a single photon per pulse with a success probability of 0.73 and 0.37, respectively. We show that the protocol is totally secure against attacks but could be correctly implemented only with a success probability of a few percent.
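
    A hedged back-of-envelope sketch of how independent probabilistic components multiply into an overall implementation success probability; the assumed composition of one run (two heralded photons and one linear-optics CNOT) is an illustration, not the circuit of the paper.

    ```python
    # Assumed run composition (two photons, one CNOT) -- not taken from the paper.
    import math

    p_cnot = math.sqrt(4 / 27)   # linear-optics CNOT success probability quoted above
    p_qd, p_hsps = 0.73, 0.37    # single-photon emission probabilities quoted above

    def run_success(p_source, n_photons=2, n_cnots=1):
        """Probability that all probabilistic elements of one run succeed."""
        return (p_source ** n_photons) * (p_cnot ** n_cnots)

    print(f"QD source:   {run_success(p_qd):.3f}")
    print(f"HSPS source: {run_success(p_hsps):.3f}")
    ```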

  4. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-01-14

    During the past quarter, float-sink analyses were completed for four of the seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid-February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operators' manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.

  5. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-04-11

    The test data obtained from the Baseline Assessment, which compares the performance of density tracers to that of different sizes of coal particles, are now complete. The experimental results show that the tracer data can indeed be used to accurately predict HMC performance. The following conclusions were drawn: (i) the tracer curve is slightly sharper than the curve for the coarsest size fraction of coal (probably due to the greater resolution of the tracer technique), (ii) the Ep increases with decreasing coal particle size, and (iii) the Ep values are not excessively large for well-maintained HMC circuits. The major problems discovered were associated with improper apex-to-vortex finder ratios and particle hang-up due to media segregation. Only one plant yielded test data that were typical of a fully optimized level of performance.
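
    For readers unfamiliar with the Ep figure quoted above, here is a minimal sketch of the standard probable-error calculation from a partition curve; the density and partition numbers are invented for illustration, not project data.

    ```python
    # Hedged sketch of the probable error Ep = (d75 - d25)/2 read off a partition
    # curve. The values below are made up, not measurements from this project.
    import numpy as np

    # Relative density of tracers (or coal fractions) and percent reporting to sinks.
    density   = np.array([1.30, 1.40, 1.50, 1.60, 1.70, 1.80])
    partition = np.array([2.0, 10.0, 35.0, 75.0, 94.0, 99.0])   # percent to sinks

    def ep(density, partition):
        """Interpolate d25 and d75 from the partition curve and return Ep."""
        d25 = np.interp(25.0, partition, density)
        d75 = np.interp(75.0, partition, density)
        return 0.5 * (d75 - d25)

    print(f"Ep = {ep(density, partition):.3f}")   # a sharper curve gives a smaller Ep
    ```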

  6. Controlled Dense Coding with the W State

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Bai, Ming-qiang; Mo, Zhi-wen

    2017-09-01

    The average amount of information is an important factor in implementing dense coding. Based on this, we propose two schemes for controlled dense coding that use the three-qubit entangled W state as the quantum channel. In these schemes, the controller (Charlie) can adjust the local measurement angle θ to modulate the entanglement, and consequently the average amount of information transmitted from the sender (Alice) to the receiver (Bob). Although the two schemes yield the same average amount of information, the second scheme has an advantage over the first.
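
    A small numerical illustration (not the authors' full scheme) of the mechanism described above: projecting Charlie's qubit of the W state onto a basis rotated by θ and computing the concurrence left between Alice and Bob.

    ```python
    # Hedged illustration: Charlie's measurement angle theta on one qubit of the
    # W state tunes the entanglement remaining between Alice and Bob.
    import numpy as np

    def w_state():
        """|W> = (|001> + |010> + |100>)/sqrt(3), qubit order (Alice, Bob, Charlie)."""
        psi = np.zeros(8)
        psi[[1, 2, 4]] = 1 / np.sqrt(3)
        return psi.reshape(2, 2, 2)          # tensor indices (a, b, c)

    def project_charlie(psi_abc, theta, outcome=0):
        """Project Charlie's qubit onto cos(t)|0>+sin(t)|1> (or the orthogonal state)."""
        m = np.array([np.cos(theta), np.sin(theta)])
        if outcome == 1:
            m = np.array([np.sin(theta), -np.cos(theta)])
        psi_ab = np.tensordot(psi_abc, m.conj(), axes=([2], [0]))
        prob = np.vdot(psi_ab, psi_ab).real
        return psi_ab / np.sqrt(prob), prob

    def concurrence(psi_ab):
        """Concurrence of a pure two-qubit state a|00>+b|01>+c|10>+d|11>."""
        return 2 * abs(psi_ab[0, 0] * psi_ab[1, 1] - psi_ab[0, 1] * psi_ab[1, 0])

    for theta in np.linspace(0, np.pi / 2, 5):
        psi_ab, p = project_charlie(w_state(), theta)
        print(f"theta = {theta:.2f}: outcome prob = {p:.3f}, "
              f"Alice-Bob concurrence = {concurrence(psi_ab):.3f}")
    ```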

  7. Dense Coding in a Two-Spin Squeezing Model with Intrinsic Decoherence

    NASA Astrophysics Data System (ADS)

    Zhang, Bing-Bing; Yang, Guo-Hui

    2016-11-01

    Quantum dense coding in a two-spin squeezing model under intrinsic decoherence is investigated for different initial states (the Werner state and the Bell state). We show that the dense coding capacity χ oscillates with time and finally reaches different stable values. χ can be enhanced by decreasing the magnetic field Ω and the intrinsic decoherence γ or by increasing the squeezing interaction μ; moreover, one can obtain a valid dense coding capacity (χ > 1) by modulating these parameters. The stable value of χ reveals that the decoherence cannot entirely destroy the dense coding capacity. In addition, decreasing Ω or increasing μ not only enhances the stable value of χ but also weakens the effects of decoherence. When the initial state is the Werner state, the purity r of the initial state plays a key role in adjusting the dense coding capacity, and χ can be significantly increased by improving the purity. When the initial state is the Bell state, a large spin squeezing interaction compared with the magnetic field guarantees optimal dense coding. A valid dense coding capacity cannot always be achieved for the Werner state, whereas for the Bell state χ always remains above 1.
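
    The capacity χ referred to above is commonly evaluated as χ = log2(d) + S(ρ_B) − S(ρ_AB); the sketch below computes it for a Werner state as a stand-alone example, independent of the spin-squeezing model of the paper.

    ```python
    # Hedged sketch of the commonly used dense-coding-capacity expression for
    # unitary encoding, chi = log2(d) + S(rho_B) - S(rho_AB), evaluated for a
    # Werner state (d = 2, so "valid" dense coding means chi > 1).
    import numpy as np

    def von_neumann_entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    def partial_trace_A(rho_ab):
        """Trace out the first qubit of a two-qubit density matrix."""
        return rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

    def dense_coding_capacity(rho_ab, d=2):
        rho_b = partial_trace_A(rho_ab)
        return np.log2(d) + von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

    # Werner state r|Phi+><Phi+| + (1 - r) I/4, purity parameter r.
    phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
    for r in (1.0, 0.8, 0.5):
        rho = r * np.outer(phi_plus, phi_plus) + (1 - r) * np.eye(4) / 4
        print(f"r = {r:.1f}: chi = {dense_coding_capacity(rho):.3f}")
    ```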

  8. Relating quantum discord with the quantum dense coding capacity

    SciTech Connect

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  9. Code Optimization Techniques

    SciTech Connect

    MAGEE,GLEN I.

    2000-08-03

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
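
    The abstract does not list the specific optimizations, but a classic example of the kind of change that cuts Reed-Solomon encoding time is replacing bitwise GF(2^8) multiplication with exp/log table lookups, sketched below.

    ```python
    # Hedged illustration of one classic Reed-Solomon encoder optimization (not
    # necessarily the ones used in the AURA project): table-driven GF(2^8)
    # multiplication using the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
    GF_EXP = [0] * 512
    GF_LOG = [0] * 256

    x = 1
    for i in range(255):
        GF_EXP[i] = x
        GF_LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d
    for i in range(255, 512):                 # duplicate so the sum of logs needs no mod
        GF_EXP[i] = GF_EXP[i - 255]

    def gf_mul_slow(a, b):
        """Bitwise carry-less multiply with reduction -- the unoptimized baseline."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11d
            b >>= 1
        return r

    def gf_mul_fast(a, b):
        """Two table lookups and one integer add replace the bit loop."""
        if a == 0 or b == 0:
            return 0
        return GF_EXP[GF_LOG[a] + GF_LOG[b]]

    assert all(gf_mul_slow(a, b) == gf_mul_fast(a, b)
               for a in range(256) for b in range(256))
    print("table-driven GF(256) multiply matches the bitwise version")
    ```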

  10. Induction technology optimization code

    SciTech Connect

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-08-21

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. The Induction Technology Optimization Study (ITOS) was undertaken to examine viable combinations of a linear induction accelerator and a relativistic klystron (RK) for high power microwave production. It is proposed that microwaves from the RK will power a high-gradient accelerator structure for linear collider development. Previous work indicates that the RK will require a nominal 3-MeV, 3-kA electron beam with a 100-ns flat top. The proposed accelerator-RK combination will be a high average power system capable of sustained microwave output at a 300-Hz pulse repetition frequency. The ITOS code models many combinations of injector, accelerator, and pulse power designs that will supply an RK with the beam parameters described above.

  11. Deterministic dense coding and faithful teleportation with multipartite graph states

    SciTech Connect

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-05-15

    We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a graph state to be viable for the proposed schemes: for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
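
    The stated criterion is invertibility of a 0/1 matrix over GF(2); a minimal check by Gaussian elimination modulo 2 might look like the sketch below (the example matrices are made up).

    ```python
    # Hedged sketch of the invertibility test: Gaussian elimination over GF(2)
    # on the (sender x receiver) biadjacency matrix. Example matrices are invented.
    import numpy as np

    def invertible_gf2(mat):
        """Return True if the 0/1 matrix is invertible over GF(2)."""
        m = np.array(mat, dtype=np.uint8) % 2
        if m.shape[0] != m.shape[1]:
            return False
        n = m.shape[0]
        for col in range(n):
            pivot_rows = np.nonzero(m[col:, col])[0]
            if pivot_rows.size == 0:
                return False                      # no pivot -> singular over GF(2)
            pivot = pivot_rows[0] + col
            m[[col, pivot]] = m[[pivot, col]]     # swap the pivot row into place
            for r in range(n):
                if r != col and m[r, col]:
                    m[r] ^= m[col]                # eliminate this column elsewhere
        return True

    # Example: adjacency between two senders and two receivers in a small graph.
    print(invertible_gf2([[1, 1], [0, 1]]))   # True  -> usable for the protocols
    print(invertible_gf2([[1, 1], [1, 1]]))   # False -> not usable
    ```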

  12. Optimizing Dense Plasma Focus Neutron Yields with Fast Gas Jets

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Kueny, Christopher; Stein, Elizabeth; Link, Anthony; Schmidt, Andrea

    2016-10-01

    We report a study using the particle-in-cell code LSP to perform fully kinetic simulations modeling dense plasma focus (DPF) devices with high-density gas jets on axis. The high-density jet models fast gas puffs, which allow for more mass on axis while maintaining the optimal pressure for the DPF. As the density of the jet relative to the background fill increases, we find that the neutron yield increases, as does the variability in the neutron yield. Introducing perturbations in the jet density allows for consistent seeding of the m = 0 instability, leading to more consistent ion acceleration and higher neutron yields with less variability. Jets with higher on-axis density are found to have the greatest yield. The optimal jet configuration is explored. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  13. DISH CODE A deeply simplified hydrodynamic code for applications to warm dense matter

    SciTech Connect

    More, Richard

    2007-08-22

    DISH is a 1-dimensional (planar) Lagrangian hydrodynamic code intended for application to experiments on warm dense matter. The code is a simplified version of the DPC code written in the Data and Planning Center of the National Institute for Fusion Science in Toki, Japan. DPC was originally intended as a testbed for exploring equation of state and opacity models, but turned out to have a variety of applications. The DISH code is a "deeply simplified hydrodynamic" code, deliberately made as simple as possible. It is intended to be easy to understand, easy to use and easy to change.

  14. Teleportation and dense coding with genuine multipartite entanglement.

    PubMed

    Yeo, Ye; Chua, Wee Kang

    2006-02-17

    We present an explicit protocol E0 for faithfully teleporting an arbitrary two-qubit state via a genuine four-qubit entangled state. By construction, our four-partite state is not reducible to a pair of Bell states. Its properties are compared and contrasted with those of the four-party Greenberger-Horne-Zeilinger and W states. We also give a dense coding scheme D0 involving our state as a shared resource of entanglement. Both D0 and E0 indicate that our four-qubit state is a likely candidate for the genuine four-partite analogue to a Bell state.

  15. Performance analysis of simultaneous dense coding protocol under decoherence

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming; Zhang, Cai; Situ, Haozhen

    2017-09-01

    The simultaneous dense coding (SDC) protocol is useful in designing quantum protocols. We analyze the performance of the SDC protocol under the influence of noisy quantum channels. Six kinds of paradigmatic Markovian noise along with one kind of non-Markovian noise are considered. The joint success probability of both receivers and the success probabilities of one receiver are calculated for three different locking operators. Some interesting properties have been found, such as invariance and symmetry. Among the three locking operators we consider, the SWAP gate is most resistant to noise and results in the same success probabilities for both receivers.

  16. Secure N-dimensional simultaneous dense coding and applications

    NASA Astrophysics Data System (ADS)

    Situ, H.; Qiu, D.; Mateus, P.; Paunković, N.

    2015-12-01

    Simultaneous dense coding (SDC) guarantees that Bob and Charlie simultaneously receive their respective information from Alice in their respective processes of dense coding. The idea is to use the so-called locking operation to “lock” the entanglement channels, thus requiring a joint unlocking operation by Bob and Charlie in order to simultaneously obtain the information sent by Alice. We present some new results on SDC: (1) We propose three SDC protocols, which use different N-dimensional entanglement (Bell state, W state and GHZ state). (2) Besides the quantum Fourier transform, two new locking operators are introduced (the double controlled-NOT operator and the SWAP operator). (3) In the case that spatially distant Bob and Charlie have to finalize the protocol by implementing the unlocking operation through communication, we improve our protocol’s fairness, with respect to Bob and Charlie, by implementing the unlocking operation in series of steps. (4) We improve the security of SDC against the intercept-resend attack. (5) We show that SDC can be used to implement a fair contract signing protocol. (6) We also show that the N-dimensional quantum Fourier transform can act as the locking operator in simultaneous teleportation of N-level quantum systems.
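
    A minimal sketch of the N-dimensional quantum Fourier transform mentioned above as a locking operator, built as an explicit matrix and checked for unitarity (a qutrit example; the full locking/unlocking protocol is not reproduced here).

    ```python
    # Hedged sketch: the N-dimensional QFT matrix F[j, k] = omega^(jk)/sqrt(N),
    # with omega = exp(2*pi*i/N), used above as a locking operator.
    import numpy as np

    def qft_matrix(n):
        j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

    F = qft_matrix(3)                              # a qutrit (N = 3) example
    print(np.allclose(F @ F.conj().T, np.eye(3)))  # unitary, so it can "lock"
    # "Unlocking" is just applying the inverse transform F^dagger jointly.
    ```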

  17. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the

  18. Experimental realization of the analogy of quantum dense coding in classical optics

    SciTech Connect

    Yang, Zhenwei; Sun, Yifan; Li, Pengyun; Zhang, Xiong; Song, Xinbing E-mail: songxinbing@bit.edu.cn; Zhang, Xiangdong E-mail: songxinbing@bit.edu.cn

    2016-06-15

    We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding that uses pairs of photons entangled in polarization, the proposed design exhibits many advantages. Besides being convenient to realize in optical communication, the attainable channel capacity for dense coding in the experiment reaches 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
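
    The capacities quoted above follow from simple counting: log2(3) ≈ 1.585 bits when only three messages can be distinguished (as in photonic dense coding with linear-optics Bell analysis), versus 2 bits when all four are resolved, so 24 bits fit into 12 quaternary symbols.

    ```python
    # Hedged arithmetic behind the quoted capacities; just counting distinguishable
    # messages, not a model of the experiment itself.
    import math

    print(f"3 distinguishable messages -> {math.log2(3):.3f} bits per pair")   # 1.585
    print(f"4 distinguishable messages -> {math.log2(4):.3f} bits per pair")   # 2.000
    print(f"24 bits / 2 bits per symbol = {24 // 2} quaternary digits")
    ```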

  19. Power System Optimization Codes Modified

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    1999-01-01

    A major modification of and addition to existing Closed Brayton Cycle (CBC) space power system optimization codes was completed. These modifications relate to the global minimum mass search driver programs containing three nested iteration loops comprising iterations on cycle temperature ratio, and three separate pressure ratio iteration loops--one loop for maximizing thermodynamic efficiency, one for minimizing radiator area, and a final loop for minimizing overall power system mass. Using the method of steepest ascent, the code sweeps through the pressure ratio space repeatedly, each time with smaller iteration step sizes, so that the three optimum pressure ratios can be obtained to any desired accuracy for each of the objective functions referred to above (i.e., maximum thermodynamic efficiency, minimum radiator area, and minimum system mass). Two separate options for the power system heat source are available: 1. A nuclear fission reactor can be used. It is provided with a radiation shield (composed of a lithium hydride (LiH) neutron shield and tungsten (W) gamma shield). Suboptions can be used to select the type of reactor (i.e., fast spectrum liquid metal cooled or epithermal high-temperature gas reactor (HTGR)). 2. A solar heat source can be used. This option includes a parabolic concentrator and heat receiver for raising the temperature of the recirculating working fluid. A useful feature of the code modifications is that key cycle parameters are displayed, including the overall system specific mass in kilograms per kilowatt and the system specific power in watts per kilogram, as the results for each temperature ratio are computed. As the minimum mass temperature ratio is encountered, a message is printed out. Several levels of detailed information on cycle state points, subsystem mass results, and radiator temperature profiles are stored for this temperature ratio condition and can be displayed or printed by users.
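
    A schematic stand-in (not NASA's code) for the repeated-sweep idea described above: scan a pressure-ratio range, then re-scan around the best point with a smaller step until the desired accuracy is reached; the objective function is a hypothetical placeholder for the system specific mass.

    ```python
    # Hedged sketch of repeated sweeps with shrinking step size over one design
    # variable. The quadratic "mass" objective below is made up for illustration.
    def refine_sweep(objective, lo, hi, steps=20, passes=4):
        for _ in range(passes):
            step = (hi - lo) / steps
            candidates = [lo + i * step for i in range(steps + 1)]
            best = min(candidates, key=objective)
            lo, hi = max(lo, best - step), min(hi, best + step)   # zoom in around the best point
        return best

    mass = lambda pr: (pr - 3.2) ** 2 + 25.0      # hypothetical mass vs pressure ratio
    print(f"optimum pressure ratio ~ {refine_sweep(mass, 1.5, 6.0):.4f}")
    ```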

  1. Study of controlled dense coding with some discrete tripartite and quadripartite states

    NASA Astrophysics Data System (ADS)

    Roy, Sovik; Ghosh, Biplab

    2015-07-01

    The paper presents a detailed study of the controlled dense coding scheme for different types of three- and four-particle states. These include the GHZ state, GHZ-type states, maximal slice (MS) states, the 4-particle GHZ state, and the W class of states. It is shown that GHZ-type states can be used for controlled dense coding in a probabilistic sense. We show the relations among the parameter of the GHZ-type state, the concurrence of the bipartite state shared by the two parties, and Charlie's measurement angle θ. The GHZ states, as a special case of MS states depending on the parameters, are also considered. We find that the tripartite W state and the quadripartite W state cannot be used for controlled dense coding, whereas |Wn>ABC states can be used probabilistically. Finally, we investigate the controlled dense coding scheme for tripartite qutrit states.

  2. Quantum Dense Coding About a Two-Qubit Heisenberg XYZ Model

    NASA Astrophysics Data System (ADS)

    Xu, Hui-Yun; Yang, Guo-Hui

    2017-09-01

    Taking into account a nonuniform magnetic field, quantum dense coding with thermal entangled states of a two-qubit anisotropic Heisenberg XYZ chain is investigated in detail. We mainly show how the dense coding capacity (χ) behaves as different parameters change. It is found that the dense coding capacity χ can be enhanced by decreasing the magnetic field B, the degree of inhomogeneity b, and the temperature T, or by increasing the coupling constant along the z-axis, Jz. In addition, we find that χ remains stable as the anisotropy of the XY plane, Δ, changes, under certain temperature conditions. By studying the effect of the different parameters on χ, we show that one can properly tune the values of B, b, Jz, and Δ, or adjust the temperature T, to obtain a valid dense coding capacity (χ > 1). Moreover, the temperature plays a key role in adjusting the value of the dense coding capacity χ; a valid dense coding capacity can always be obtained in the low-temperature limit.

  3. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when sufficient computing power is available. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast and parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.

  4. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes, so finding good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.

  5. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
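
    The near-optimality claim above reduces to simple arithmetic: an (n, k) MDS code corrects any n − k erasures per codeword, so block interleaving to depth I guarantees recovery from any single burst of up to I(n − k) symbols. The RS(255, 223) parameters below are illustrative, not taken from the article.

    ```python
    # Hedged arithmetic: guaranteed burst-erasure protection of a depth-I
    # block-interleaved (n, k) MDS code. Parameters here are illustrative.
    def guaranteed_burst(n, k, depth):
        """Each codeword sees at most n - k erasures from a burst of depth*(n-k) symbols."""
        return depth * (n - k)

    n, k, depth = 255, 223, 8
    print(f"rate {k / n:.3f}, total length {n * depth} symbols, "
          f"guaranteed burst protection {guaranteed_burst(n, k, depth)} symbols")
    ```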

  6. Optimal interference code based on machine learning

    NASA Astrophysics Data System (ADS)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, using the m-sequence as a case study. Building on coding theory, we introduce the jamming methods and simulate the interference effect and probability model in MATLAB. Based on the length of time the adversary needs for decoding, we find the optimal formula and optimal coefficients using machine learning, and thus obtain a new optimal interference code. First, in the recognition phase, we judge the effect of the interference by simulating the length of the laser seeker's decoding period. Then, in the tracking phase, we simulate the interference process using laser active deception jamming, the method chosen in this study. To improve the interference performance, the model is simulated in MATLAB. We find the least number of pulse intervals that must be received, which lets us determine the precise interval number of the laser pointer for m-sequence encoding. To find the shortest interval, we use the greatest-common-divisor method. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code already received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
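
    For concreteness, a minimal generator for the m-sequence the abstract takes as its case study (a maximal-length LFSR with taps x^4 + x + 1, period 15); the jamming and machine-learning steps of the paper are not reproduced here.

    ```python
    # Hedged sketch: Fibonacci LFSR producing a period-15 m-sequence from the
    # primitive polynomial x^4 + x + 1. Tap positions and seed are just one example.
    def m_sequence(taps=(4, 1), length=15, seed=0b1000):
        state, out = seed, []
        n = max(taps)
        for _ in range(length):
            out.append(state & 1)               # output the low bit each step
            fb = 0
            for t in taps:                      # feedback = XOR of the tapped stages
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (n - 1))
        return out

    seq = m_sequence()
    print(seq, "period:", len(seq))
    ```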

  7. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, a "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  8. Cross-code comparisons of mixing during the implosion of dense cylindrical and spherical shells

    NASA Astrophysics Data System (ADS)

    Joggerst, C. C.; Nelson, Anthony; Woodward, Paul; Lovekin, Catherine; Masser, Thomas; Fryer, Chris L.; Ramaprabhu, P.; Francois, Marianne; Rockefeller, Gabriel

    2014-10-01

    We present simulations of the implosion of a dense shell in two-dimensional (2D) spherical and cylindrical geometry performed with four different compressible, Eulerian codes: RAGE, FLASH, CASTRO, and PPM. We follow the growth of instabilities on the inner face of the dense shell. Three codes employed Cartesian grid geometry, and one (FLASH) employed polar grid geometry. While the codes are similar, they employ different advection algorithms, limiters, adaptive mesh refinement (AMR) schemes, and interface-preservation techniques. We find that the growth rate of the instability is largely insensitive to the choice of grid geometry or other implementation details specific to an individual code, provided the grid resolution is sufficiently fine. Overall, all simulations from different codes compare very well on the fine grids for which we tested them, though they show slight differences in small-scale mixing. Simulations produced by codes that explicitly limit numerical diffusion show a smaller amount of small-scale mixing than codes that do not. This difference is most prominent for low-mode perturbations where little instability finger interaction takes place, and less prominent for high- or multi-mode simulations where a great deal of interaction takes place, though it is still present. We present RAGE and FLASH simulations to quantify the initial perturbation amplitude to wavelength ratio at which metrics of mixing agree across codes, and find that bubble/spike amplitudes are converged for low-mode and high-mode simulations in which the perturbation amplitude is more than 1% and 5% of the wavelength of the perturbation, respectively. Other metrics of small-scale mixing depend on details of multi-fluid advection and do not converge between codes for the resolutions that were accessible.

  9. Efficient simultaneous dense coding and teleportation with two-photon four-qubit cluster states

    NASA Astrophysics Data System (ADS)

    Zhang, Cai; Situ, Haozhen; Li, Qin; He, Guang Ping

    2016-08-01

    We first propose a simultaneous dense coding protocol with two-photon four-qubit cluster states in which two receivers can simultaneously get their respective classical information sent by a sender. Because each photon has two degrees of freedom, the protocol achieves a high transmittance. The security of the simultaneous dense coding protocol has also been analyzed. Second, we investigate how to simultaneously teleport two different quantum states, encoded in the polarization and path degrees of freedom, to two receivers using cluster states, and discuss its security. The preparation and transmission of two-photon four-qubit cluster states is less difficult than that of four-photon entangled states, and such states have been experimentally generated with nearly perfect fidelity and a high generation rate. Thus, our protocols are feasible with current quantum techniques.

  10. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    SciTech Connect

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joeseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amounts compared with high-power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  11. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
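
    A toy illustration of forward-mode automatic differentiation with dual numbers, far simpler than the code-based AD tools applied in the paper, but showing how an exact parameter sensitivity is carried through ordinary arithmetic; the model function is a made-up stand-in.

    ```python
    # Hedged toy: forward-mode AD via dual numbers, carrying d(output)/d(parameter)
    # exactly through arithmetic. Not the forward/adjoint tooling of the paper.
    class Dual:
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)
        __rmul__ = __mul__

    def model(pressure, stiffness):
        # Hypothetical stand-in for a computed response depending on one model parameter.
        return stiffness * pressure * pressure + 2.0 * pressure

    p = 3.0
    k = Dual(1.5, 1.0)                 # seed the derivative d/d(stiffness) = 1
    out = model(p, k)
    print(f"value = {out.value}, d(value)/d(stiffness) = {out.deriv}")   # 19.5 and 9.0
    ```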

  12. Rate-distortion optimized adaptive transform coding

    NASA Astrophysics Data System (ADS)

    Lim, Sung-Chang; Kim, Dae-Yeon; Jeong, Seyoon; Choi, Jin Soo; Choi, Haechul; Lee, Yung-Lyul

    2009-08-01

    We propose a rate-distortion optimized transform coding method that adaptively employs either an integer cosine transform, an integer-approximated version of the discrete cosine transform (DCT), or an integer sine transform (IST), in a rate-distortion sense. The DCT, which has been adopted in most video-coding standards, is known as a suboptimal substitute for the Karhunen-Loève transform. However, depending on the correlation of a signal, an alternative transform can achieve higher coding efficiency. We introduce a discrete sine transform (DST) that achieves high energy compaction in a correlation coefficient range of -0.5 to 0.5 and apply it to the current design of H.264/AVC (advanced video coding). Moreover, to avoid encoder-decoder mismatch and keep the implementation simple, an IST that is an integer-approximated version of the DST is developed. The experimental results show that the proposed method achieves a Bjøntegaard Delta-rate gain of up to 5.49% compared to Joint Model 11.0.
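
    A hedged sketch of the rate-distortion selection idea: for each block, compare a DCT-based and a DST-based coding cost J = D + λR and keep the cheaper transform. The paper works with integer transforms inside H.264/AVC; here plain DCT-II/DST-II on a 1-D toy block, with the rate crudely proxied by the count of nonzero quantized coefficients.

    ```python
    # Hedged sketch of per-block transform selection by rate-distortion cost.
    # Quantization step, lambda, and the toy signal are arbitrary assumptions.
    import numpy as np
    from scipy.fft import dct, dst, idct, idst

    def rd_cost(block, transform, inverse, q=8.0, lam=4.0):
        coeffs = transform(block, norm="ortho")
        quant = np.round(coeffs / q)
        recon = inverse(quant * q, norm="ortho")
        distortion = float(np.sum((block - recon) ** 2))
        rate = int(np.count_nonzero(quant))          # crude rate proxy
        return distortion + lam * rate

    rng = np.random.default_rng(0)
    block = np.cumsum(rng.normal(size=8))            # a correlated toy signal

    j_dct = rd_cost(block, dct, idct)
    j_dst = rd_cost(block, dst, idst)
    print("choose", "DCT" if j_dct <= j_dst else "DST",
          f"(J_dct = {j_dct:.2f}, J_dst = {j_dst:.2f})")
    ```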

  13. Optimizing Extender Code for NCSX Analyses

    SciTech Connect

    M. Richman, S. Ethier, and N. Pomphrey

    2008-01-22

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than a Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch.

  14. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  15. Optimal zone coding using the slant transform

    SciTech Connect

    Zadiraka, V.K.; Evtushenko, V.N.

    1995-03-01

    Discrete orthogonal transforms (DOTs) are widely used in digital signal processing, image coding and compression, systems theory, communication, and control. A special representative of the class of DOTs with nonsinusoidal basis functions is the slant transform, which is distinguished by the presence of a slanted vector with linearly decreasing components in its basis. The slant transform of fourth and eighth orders was introduced in 1971 by Enomoto and Shibata especially for efficient representation of the video signal in line sections with smooth variation of brightness. It has been used for television image coding. Pratt, Chen, and Welch generalized the slant transform to vectors of any dimension N = 2^n and two-dimensional arrays, and derived posterior estimates of reconstruction error with zonal image compression (the zones were chosen by trial and error) for various transforms. These estimates show that, for the same N and the same compression ratio τ, the slant transform is inferior to the Karhunen-Loève transform and superior to the Walsh and Fourier transforms. In this paper, we derive prior estimates of the reconstruction error for the slant transform in zone coding and suggest an optimal technique for zone selection.

  16. New optimal asymmetric quantum codes from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2014-06-01

    In this paper, we construct two classes of asymmetric quantum codes by using constacyclic codes. The first class is the asymmetric quantum codes with parameters [[q^2 + 1, q^2 + 1 - 2(t + k + 1), (2k + 2)/(2t + 2)

  17. Optimization Principles for the Neural Code

    NASA Astrophysics Data System (ADS)

    Deweese, Michael Robert

    1995-01-01

    Animals receive information from the world in the form of continuous functions of time. At a very early stage in processing, however, these continuous signals are converted into discrete sequences of identical "spikes". All information that the brain receives about the outside world is encoded in the arrival times of these spikes. The goal of this thesis is to determine if there is a universal principle at work in this neural code. We are motivated by several recent experiments on a wide range of sensory systems which share four main features: High information rates, moderate signal to noise ratio, efficient use of the spike train entropy to encode the signal, and the ability to extract nearly all the information encoded in the spike train with a linear response function triggered by the spikes. We propose that these features can be understood in terms of codes "designed" to maximize information flow. To test this idea, we use the fact that any point process encoding of an analog signal embedded in noise can be written in the language of a threshold crossing model to develop a systematic expansion for the transmitted information about the Poisson limit--the limit where there are no correlations between the spikes. All codes take the same simple form in the Poisson limit, and all of the seemingly unrelated features of the data arise naturally when we optimize a simple linear filtered threshold crossing model. We make a new prediction: Finding the optimum requires adaptation to the statistical structure of the signal and noise, not just to DC offsets. The only disagreement we find is that real neurons outperform our model in the task it was optimized for--they transmit much more information. We then place an upper bound on the amount of information available from the leading term in the Poisson expansion for any possible encoding, and find that real neurons do exceedingly well even by this standard. We conclude that several important features of the neural code can

  18. GENERAL: Deterministic Quantum Secure Direct Communication with Dense Coding and Continuous Variable Operations

    NASA Astrophysics Data System (ADS)

    Han, Lian-Fang; Chen, Yue-Ming; Yuan, Hao

    2009-04-01

    We propose a deterministic quantum secure direct communication protocol by using dense coding. Two check photon sequences are used to check the security of the channels between the message sender and the receiver. Continuous variable operations, instead of the usual discrete unitary operations, are performed on the travel photons so that the security of the present protocol can be enhanced. Therefore some specific attacks, such as the denial-of-service attack, the intercept-measure-resend attack and the invisible photon attack, can be prevented in an ideal quantum channel. In addition, the scheme is still secure in a noisy channel. Furthermore, this protocol has the advantage of high capacity and can be realized experimentally.

  19. Optimality principles for the visual code

    NASA Astrophysics Data System (ADS)

    Pitkow, Xaq

    One way to try to make sense of the complexities of our visual system is to hypothesize that evolution has developed nearly optimal solutions to the problems organisms face in the environment. In this thesis, we study two such principles of optimality for the visual code. In the first half of this dissertation, we consider the principle of decorrelation. Influential theories assert that the center-surround receptive fields of retinal neurons remove spatial correlations present in the visual world. It has been proposed that this decorrelation serves to maximize information transmission to the brain by avoiding transfer of redundant information through optic nerve fibers of limited capacity. While these theories successfully account for several aspects of visual perception, the notion that the outputs of the retina are less correlated than its inputs has never been directly tested at the site of the putative information bottleneck, the optic nerve. We presented visual stimuli with naturalistic image correlations to the salamander retina while recording responses of many retinal ganglion cells using a microelectrode array. The output signals of ganglion cells are indeed decorrelated compared to the visual input, but the receptive fields are only partly responsible. Much of the decorrelation is due to the nonlinear processing by neurons rather than the linear receptive fields. This form of decorrelation dramatically limits information transmission. Instead of improving coding efficiency we show that the nonlinearity is well suited to enable a combinatorial code or to signal robust stimulus features. In the second half of this dissertation, we develop an ideal observer model for the task of discriminating between two small stimuli which move along an unknown retinal trajectory induced by fixational eye movements. The ideal observer is provided with the responses of a model retina and guesses the stimulus identity based on the maximum likelihood rule, which involves sums

  20. Dense codes at high speeds: varying stimulus properties to improve visual speller performance.

    PubMed

    Geuze, Jeroen; Farquhar, Jason D R; Desain, Peter

    2012-02-01

    This paper investigates the effect of varying different stimulus properties on performance of the visual speller. Each of the different stimulus properties has been tested in previous literature and has a known effect on visual speller performance. This paper investigates whether a combination of these types of stimuli can lead to a greater improvement. It describes an experiment aimed at answering the following questions. (i) Does visual speller performance suffer from high stimulus rates? (ii) Does an increase in stimulus rate lead to a lower training time for an online visual speller? (iii) What aspect of the difference in the event-related potential to a flash or a flip stimulus causes the increase in accuracy? (iv) Can an error-correcting (dense) stimulus code overcome the reduction in performance associated with decreasing target-to-target intervals? We found that higher stimulus rates can improve visual speller performance and can lead to less time required to train the system. We also found that a proper stimulus code can overcome the stronger response to rows and columns, but cannot greatly improve speller performance.

  1. New optimal asymmetric quantum codes constructed from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Lü, Liangdong

    2017-02-01

    In this paper, we propose the construction of asymmetric quantum codes from two families of constacyclic codes over the finite field 𝔽_(q^2) of code length n, where for the first family, q is an odd prime power of the form 4t + 1 (t ≥ 1 an integer) or 4t - 1 (t ≥ 2 an integer) and n1 = (q^2 + 1)/2; for the second family, q is an odd prime power of the form 10t + 3 or 10t + 7 (t ≥ 0 an integer) and n2 = (q^2 + 1)/5. As a result, families of new asymmetric quantum codes [[n, k, d_z/d_x

  2. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
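
    A plain-Python analogue of the dual-version idea described above (the patent concerns compiler-generated machine code, not Python): run the aggressively optimized variant and roll back to the conservative variant if it fails.

    ```python
    # Hedged analogue of the aggressive/conservative dual-version mechanism:
    # try the "unsafe" fast path and fall back on any exception.
    def with_rollback(conservative):
        def decorate(aggressive):
            def run(*args, **kwargs):
                try:
                    return aggressive(*args, **kwargs)
                except Exception:
                    # Failure of the unsafe optimization: fall back and retry.
                    return conservative(*args, **kwargs)
            return run
        return decorate

    def safe_sum(values):
        total = 0.0
        for v in values:
            total += float(v)
        return total

    @with_rollback(safe_sum)
    def fast_sum(values):
        return sum(values)              # "unsafe" if values mixes strings and numbers

    print(fast_sum([1, 2, 3.5]))        # aggressive path succeeds
    print(fast_sum([1, "2", 3.5]))      # aggressive path fails, conservative path used
    ```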

  3. Construction of a Compact, Low-Inductance, 100 J Dense Plasma Focus for Yield Optimization Studies

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher; Povilus, Alex; Chapman, Steven; Falabella, Steve; Podpaly, Yuri; Shaw, Brian; Liu, Jason; Schmidt, Andrea

    2016-10-01

    A new 100 J mini dense plasma focus (DPF) has been constructed to optimize neutron yields for a variety of plasma conditions and anode shapes. The device generates neutrons by leveraging instabilities that occur during a z-pinch in a plasma sheath to accelerate a beam of deuterium ions into a background deuterium gas target. The features that distinguish this mini-DPF from previous 100 J devices are a compact, engineered electrode geometry and a low-impedance driver. The driving circuit inductance is minimized by mounting the capacitors less than 20 cm from the back of the anode and cathode, increasing the breakdown current and yields. The anode can rapidly be changed out to test new designs. The neutron yield and 2D images of the visible light emission are compared to simulations with the hybrid kinetic code LSP, which can directly simulate the device and anode designs. Initial studies of the sheath physics and neutron yields for a scaling of discharge voltages and neutral fill pressures are presented. Prepared by LLNL under Contract DE-AC52-07NA27344.

  4. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.
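
    A heavily simplified sketch of the pipeline described above (dictionary learning, sparse encoding, l1-based feature selection, then an SVM), run on synthetic data instead of hyperspectral pixels and omitting the sub-band construction and composite-kernel steps; all parameter values are arbitrary.

    ```python
    # Hedged sketch of a dictionary-learning -> sparse-encoding -> feature-selection
    # -> SVM pipeline on synthetic data; not the authors' exact configuration.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))                      # stand-in "pixel spectra"
    y = (X[:, :5].sum(axis=1) > 0).astype(int)          # stand-in land-cover labels

    dico = DictionaryLearning(n_components=40, alpha=1.0, max_iter=20,
                              random_state=0).fit(X)
    codes = sparse_encode(X, dico.components_, alpha=1.0)   # the new representation

    # Sparse (l1) logistic regression as a simple stand-in for l1/lq selection.
    selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(codes, y)
    keep = np.flatnonzero(np.abs(selector.coef_).ravel() > 1e-6)
    if keep.size == 0:                                   # guard for the toy example
        keep = np.arange(codes.shape[1])
    print(f"kept {keep.size} of {codes.shape[1]} sparse features")

    clf = SVC(kernel="linear").fit(codes[:, keep], y)
    print(f"training accuracy: {clf.score(codes[:, keep], y):.2f}")
    ```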

  5. Augmented message-matrix approach to deterministic dense-coding theory

    NASA Astrophysics Data System (ADS)

    Gerjuoy, E.; Williams, H. T.; Bourdon, P. S.

    2009-04-01

    A useful method for deriving analytical results applicable to the standard two-party deterministic dense-coding protocol is introduced and illustrated. In this protocol, communication of K perfectly distinguishable messages is attainable via K selected local unitary operations performed on one qudit from a pair of entangled qudits of equal dimension d in a pure state |ψ⟩ with largest Schmidt coefficient λ0. The method takes advantage of the fact that the K message states, together with d^2 - K augmenting orthonormal state vectors, yield a unitary matrix, thereby implying properties of the K message states which otherwise are not readily recognized. Employing this augmented message matrix, we produce simple proofs of previously established results including (i) λ0 ≤ d/K, (ii) λ0

  6. Optimal superdense coding over memory channels

    SciTech Connect

    Shadman, Z.; Kampermann, H.; Bruss, D.; Macchiavello, C.

    2011-10-15

    We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.

  7. Optimized quantum error-correction codes for experiments

    NASA Astrophysics Data System (ADS)

    Nebendahl, V.

    2015-02-01

    We identify gauge freedoms in quantum error correction (QEC) codes and introduce strategies for optimal control algorithms to find the gauges which allow the easiest experimental realization. Hereby, the optimal gauge depends on the underlying physical system and the available means to manipulate it. The final goal is to obtain optimal decompositions of QEC codes into elementary operations which can be realized with high experimental fidelities. In the first part of this paper, this subject is studied in a general fashion, while in the second part, a system of trapped ions is treated as a concrete example. A detailed optimization algorithm is explained and various decompositions are presented for the three qubit code, the five qubit code, and the seven qubit Steane code.

  8. Analysis of the optimality of the standard genetic code.

    PubMed

    Kumar, Balaji; Saini, Supreet

    2016-07-19

    Many theories have been proposed attempting to explain the origin of the genetic code. While strong reasons remain to believe that the genetic code evolved as a frozen accident, at least for the first few amino acids, other theories remain viable. In this work, we test the optimality of the standard genetic code against approximately 17 million genetic codes, and locate 29 which outperform the standard genetic code at the following three criteria: (a) robustness to point mutation; (b) robustness to frameshift mutation; and (c) ability to encode additional information in the coding region. We use a genetic algorithm to generate and score codes from different parts of the associated landscape, which are, as a result, presumably more representative of the entire landscape. Our results show that while the genetic code is sub-optimal for robustness to frameshift mutation and the ability to encode additional information in the coding region, it is very strongly selected for robustness to point mutation. This coupled with the observation that the different performance indicator scores for a particular genetic code are negatively correlated makes the standard genetic code nearly optimal for the three criteria tested in this work.

  9. Effects of intrinsic decoherence on various correlations and quantum dense coding in a two superconducting charge qubit system

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Maimaitiyiming-Tusun; Parouke-Paerhati; Ahmad-Abliz

    2015-09-01

    The influence of intrinsic decoherence on various correlations and dense coding in a model which consists of two identical superconducting charge qubits coupled by a fixed capacitor is investigated. The results show that, despite the intrinsic decoherence, the correlations as well as the dense coding channel capacity can be effectively increased through a suitable choice of system parameters, i.e., by making the mutual coupling energy between the two charge qubits larger than the Josephson energy of the qubits. The larger the difference between them, the stronger the effect. Project supported by the Project to Develop Outstanding Young Scientific Talents of China (Grant No. 2013711019), the Natural Science Foundation of Xinjiang Province, China (Grant No. 2012211A052), the Foundation for Key Program of Ministry of Education of China (Grant No. 212193), and the Innovative Foundation for Graduate Students Granted by the Key Subjects of Theoretical Physics of Xinjiang Province, China (Grant No. LLWLL201301).

  10. Group Complementary Codes With Optimized Aperiodic Correlation.

    DTIC Science & Technology

    1983-04-01

    efforts have addressed this problem in the past, and several waveform designs have resulted in the potential reduction or elimination of the range ... sidelobe problem. For example, Barker codes (also known as perfect binary words) limit the range sidelobes to a value of 1/N, expressed in the

  11. Optimization of KINETICS Chemical Computation Code

    NASA Technical Reports Server (NTRS)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be made more efficient. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
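
    The idea of shipping only the variables a subroutine actually reads and gathering only what it produces can be sketched with mpi4py as below; the variable names and array shapes are hypothetical stand-ins, not the actual KINETICS common-block variables.

        # Sketch with mpi4py (hypothetical variable names, not the real KINETICS
        # interface): broadcast only the arrays a subroutine reads, gather only
        # what it produces, instead of shipping every common-block variable.
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_species, n_levels = 32, 64
        if rank == 0:
            temperature = np.linspace(150.0, 300.0, n_levels)   # inputs the kernel needs
            densities = np.random.rand(n_species, n_levels)
        else:
            temperature = np.empty(n_levels)
            densities = np.empty((n_species, n_levels))

        # Send only what the chemistry kernel actually reads.
        comm.Bcast(temperature, root=0)
        comm.Bcast(densities, root=0)

        # Each rank works on its own slice of altitude levels.
        lo = rank * n_levels // size
        hi = (rank + 1) * n_levels // size
        local_rates = densities[:, lo:hi] * np.exp(-1000.0 / temperature[lo:hi])

        # Gather only the computed rates back to rank 0.
        rates = comm.gather(local_rates, root=0)
        if rank == 0:
            rates = np.concatenate(rates, axis=1)
            print("assembled rates:", rates.shape)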

  12. The information capacity of the genetic code: Is the natural code optimal?

    PubMed

    Kuruoglu, Ercan E; Arndt, Peter F

    2017-04-21

    We envision the molecular evolution process as an information transfer process and provide a quantitative measure for information preservation in terms of the channel capacity according to the channel coding theorem of Shannon. We calculate information capacities of DNA at the nucleotide (for non-coding DNA) and the amino acid (for coding DNA) levels using various substitution models. We extend our results on coding DNA to a discussion about the optimality of the natural codon-amino acid code. We provide the results of an adaptive search algorithm in the code domain and demonstrate the existence of a large number of genetic codes with higher information capacity. Our results support the hypothesis of an ancient extension from a 2-nucleotide codon to the current 3-nucleotide codon code to encode the various amino acids. Copyright © 2017 Elsevier Ltd. All rights reserved.
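
    The channel capacity of a given substitution model can be computed with the standard Blahut-Arimoto iteration. The sketch below uses a placeholder Jukes-Cantor-like nucleotide substitution matrix, not the fitted substitution models of the paper.

        # Sketch: capacity of a 4-letter substitution channel via Blahut-Arimoto.
        # The symmetric matrix below is a placeholder substitution model.
        import numpy as np

        def mutual_info_bits(p, W):
            """I(X;Y) in bits for input distribution p and channel W[x, y] = Pr(y|x)."""
            joint = p[:, None] * W
            py = joint.sum(axis=0, keepdims=True)
            mask = joint > 0
            return float((joint[mask] * np.log2(joint[mask] / (p[:, None] * py)[mask])).sum())

        def blahut_arimoto(W, iters=200):
            n = W.shape[0]
            p = np.full(n, 1.0 / n)                    # start from a uniform input
            for _ in range(iters):
                py = p @ W                             # output distribution
                q = (p[:, None] * W) / py[None, :]     # q[x, y] = Pr(x | y)
                w = np.exp((W * np.log(q)).sum(axis=1))
                p = w / w.sum()                        # re-estimate the input distribution
            return p, mutual_info_bits(p, W)

        p_sub = 0.02                                   # probability of each single substitution
        W = np.full((4, 4), p_sub)
        np.fill_diagonal(W, 1.0 - 3.0 * p_sub)

        p_opt, capacity = blahut_arimoto(W)
        print("capacity per nucleotide:", round(capacity, 4), "bits")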

  13. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
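
    For short lengths the search can be done exhaustively, as in the sketch below, which scores +1/-1 codes by exactly the two criteria above (peak periodic sidelobe first, then sidelobe energy). The lengths 28 to 64 treated in the report need far more elaborate search strategies, so the length here is kept small for illustration.

        # Sketch: exhaustive search over short binary (+1/-1) codes for the
        # smallest peak periodic-autocorrelation sidelobe and sidelobe energy.
        import itertools
        import numpy as np

        def periodic_sidelobes(code):
            c = np.array(code)
            return np.array([np.dot(c, np.roll(c, k)) for k in range(1, len(c))])

        def search(n):
            best = None
            for bits in itertools.product((1, -1), repeat=n):
                s = periodic_sidelobes(bits)
                key = (np.max(np.abs(s)), np.sum(s ** 2))   # criteria (1) then (2)
                if best is None or key < best[0]:
                    best = (key, bits)
            return best

        key, code = search(13)
        print("peak sidelobe:", key[0], "sidelobe energy:", key[1])
        print("code:", code)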

  14. Optimizing Nuclear Physics Codes on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Nam, Hai Ah

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  15. Optimal Subband Coding of Cyclostationary Signals

    DTIC Science & Technology

    2007-11-02

    framework, making the underlying task much simpler. • A common occurrence of cyclostationarity is in Orthogonal Frequency Division Multiplexed (OFDM) ... communications. We have shown that certain channel resource allocation problems for OFDM systems are dual problems of subband coding. We have solved the ... optimum resource allocation problem for OFDM in the multiuser environment. Specifically, we have considered in turn a variety of settings culminating

  16. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  17. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  18. Optimal neural population coding of an auditory spatial cue.

    PubMed

    Harper, Nicol S; McAlpine, David

    2004-08-05

    A sound, depending on the position of its source, can take more time to reach one ear than the other. This interaural (between the ears) time difference (ITD) provides a major cue for determining the source location. Many auditory neurons are sensitive to ITDs, but the means by which such neurons represent ITD is a contentious issue. Recent studies question whether the classical general model (the Jeffress model) applies across species. Here we show that ITD coding strategies of different species can be explained by a unifying principle: that the ITDs an animal naturally encounters should be coded with maximal accuracy. Using statistical techniques and a stochastic neural model, we demonstrate that the optimal coding strategy for ITD depends critically on head size and sound frequency. For small head sizes and/or low-frequency sounds, the optimal coding strategy tends towards two distinct sub-populations tuned to ITDs outside the range created by the head. This is consistent with recent observations in small mammals. For large head sizes and/or high frequencies, the optimal strategy is a homogeneous distribution of ITD tunings within the range created by the head. This is consistent with observations in the barn owl. For humans, the optimal strategy to code ITDs from an acoustically measured distribution depends on frequency; above 400 Hz a homogeneous distribution is optimal, and below 400 Hz distinct sub-populations are optimal.

  19. Optimal Grouping and Matching for Network-Coded Cooperative Communications

    SciTech Connect

    Sharma, S; Shi, Y; Hou, Y T; Kompella, S; Midkiff, S F

    2011-11-01

    Network-coded cooperative communications (NC-CC) is a new advance in wireless networking that exploits network coding (NC) to improve the performance of cooperative communications (CC). However, there remains very limited understanding of this new hybrid technology, particularly at the link layer and above. This paper fills in this gap by studying a network optimization problem that requires joint optimization of session grouping, relay node grouping, and matching of session/relay groups. After showing that this problem is NP-hard, we present a polynomial time heuristic algorithm to this problem. Using simulation results, we show that our algorithm is highly competitive and can produce near-optimal results.

  20. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-06-01

    Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date. Approach. We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives
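
    A toy version of the convex program described above can be written with cvxpy as below; the random matrix standing in for the head-model lead field, the ROI direction, and all constraint bounds are hypothetical, whereas the formulation in the paper is built on a realistic finite-element head model.

        # Toy convex program in the spirit of the record above (cvxpy); the random
        # matrices stand in for a real lead field and all bounds are hypothetical.
        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(1)
        n_elec, n_brain, n_roi = 64, 500, 20
        A_brain = rng.normal(size=(3 * n_brain, n_elec))   # current density J = A @ I
        A_roi = rng.normal(size=(3 * n_roi, n_elec))
        desired = np.tile([1.0, 0.0, 0.0], n_roi)          # desired direction in the ROI

        I = cp.Variable(n_elec)                            # electrode currents (mA)

        objective = cp.Maximize(desired @ (A_roi @ I))     # directional current density in ROI
        constraints = [
            cp.sum(I) == 0,                                # injected currents sum to zero
            cp.sum_squares(A_brain @ I) <= 10.0,           # total current power in the brain
            cp.abs(I) <= 1.0,                              # per-electrode current bound
            cp.norm1(I) <= 2 * 2.0,                        # total injected current <= 2 mA
        ]
        prob = cp.Problem(objective, constraints)
        prob.solve()
        print("objective:", round(prob.value, 3))
        print("active electrodes:", int(np.sum(np.abs(I.value) > 1e-3)))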

  1. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    PubMed Central

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-01-01

    Objective Transcranial direct current stimulation (tDCS) aims to alter brain function noninvasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical currents to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date. Approach We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives insight into the

  2. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  3. TRO-2D - A code for rational transonic aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Davis, W. H., Jr.

    1985-01-01

    Features and sample applications of the transonic rational optimization (TRO-2D) code are outlined. TRO-2D includes the airfoil analysis code FLO-36, the CONMIN optimization code and a rational approach to defining aero-function shapes for geometry modification. The program is part of an effort to develop an aerodynamically smart optimizer that will simplify and shorten the design process. The user has a selection of design objectives, including drag minimization with associated minimum lift, moment, and pressure distribution requirements, a choice among 14 resident aero-function shapes, and options on aerodynamic and geometric constraints. Design variables such as the angle of attack, leading edge radius and camber, shock strength and movement, supersonic pressure plateau control, etc., are discussed. The results of calculations of a reduced leading edge camber transonic airfoil and a natural-laminar-flow airfoil are provided, showing that only four design variables need be specified to obtain satisfactory results.

  4. State injection, lattice surgery, and dense packing of the deformation-based surface code

    NASA Astrophysics Data System (ADS)

    Nagayama, Shota; Satoh, Takahiko; Van Meter, Rodney

    2017-01-01

    Resource consumption of the conventional surface code is expensive, in part due to the need to separate the defects that create the logical qubit far apart on the physical qubit lattice. We propose that instantiating the deformation-based surface code using superstabilizers will make it possible to detect short error chains connecting the superstabilizers, allowing us to place logical qubits close together. Additionally, we demonstrate the process of conversion from the defect-based surface code, which works as arbitrary state injection, and a lattice-surgery-like controlled not (cnot) gate implementation that requires fewer physical qubits than the braiding cnot gate. Finally, we propose a placement design for the deformation-based surface code and analyze its resource consumption; large-scale quantum computation requires (25d² + 170d + 289)/4 physical qubits per logical qubit, where d is the code distance of the standard surface code, whereas the planar code requires 16d² - 16d + 4 physical qubits per logical qubit, for a reduction of about 50%.
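
    Taking the two qubit-count expressions as reconstructed above at face value, a few lines of Python show how the saving grows with code distance d:

        # Quick evaluation of the qubit counts quoted in the abstract (formulas as
        # reconstructed above): deformation-based code vs planar code.
        for d in (11, 21, 31, 41):
            deform = (25 * d**2 + 170 * d + 289) / 4
            planar = 16 * d**2 - 16 * d + 4
            print(f"d={d:2d}  deformation-based={deform:8.0f}  planar={planar:8.0f}  "
                  f"saving={100 * (1 - deform / planar):.0f}%")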

  5. A systematic method of interconnection optimization for dense-array concentrator photovoltaic system.

    PubMed

    Siaw, Fei-Lu; Chong, Kok-Keong

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points, namely short-circuit, open-circuit, and the maximum power point, are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%.

  6. A Systematic Method of Interconnection Optimization for Dense-Array Concentrator Photovoltaic System

    PubMed Central

    Siaw, Fei-Lu

    2013-01-01

    This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points, namely short-circuit, open-circuit, and the maximum power point, are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%. PMID:24453823

  7. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided in four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.

  8. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is therefore central to coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and the restored image shows better subjective quality and superior objective evaluation values.
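
    A much-simplified version of this kind of code search can be sketched as below: candidate binary shutter codes are scored by how invertible the resulting blur is (minimum and flatness of the DFT magnitude) and the best of many random candidates is kept. The scoring rule and the random search are placeholders, not the paper's improved criterion or its genetic algorithm.

        # Sketch: score binary shutter codes by the minimum and variance of their
        # DFT magnitude (a rough invertibility proxy), keep the best random candidate.
        import numpy as np

        rng = np.random.default_rng(0)

        def score(code):
            spec = np.abs(np.fft.fft(code))
            return spec.min() - 0.1 * spec.var()     # favour flat, zero-free spectra

        def random_code(length, ones):
            code = np.zeros(length, dtype=int)
            code[rng.choice(length, size=ones, replace=False)] = 1
            return code

        length, ones = 32, 16                        # code length and number of open slots
        best = max((random_code(length, ones) for _ in range(20000)), key=score)
        print("best score:", round(score(best), 3))
        print("code:", "".join(map(str, best)))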

  9. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.

  10. Multiview coding mode decision with hybrid optimal stopping model.

    PubMed

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, as computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.

  11. Optimal and efficient decoding of concatenated quantum block codes

    SciTech Connect

    Poulin, David

    2006-11-15

    We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead.

  12. Performance optimization of dense-array concentrator photovoltaic system considering effects of circumsolar radiation and slope error.

    PubMed

    Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui

    2015-07-27

    This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system comprising a non-imaging dish concentrator by considering the effects of circumsolar radiation and slope error. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar cell interconnection circuit in a dense-array concentrator photovoltaic module has been proposed by minimizing the current mismatch caused by the non-uniformity of the concentrated sunlight. An optimized layout of the interconnected solar cell circuit with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.

  13. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless channel; an algorithm is described for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained and appropriate comparisons against a reference system designed for no channel error were rendered.
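
    The source-coding half of such a scheme (block DCT, a variance-based bit allocation, uniform quantization) can be sketched in a few lines of Python as below; the synthetic image, the 2 bits/pixel budget and the quantizer ranges are arbitrary, and the channel-optimized quantizer design that is the point of the paper is not reproduced.

        # Sketch of the source-coding half only: 8x8 block DCT, a simple
        # log-variance bit allocation, and uniform quantization of a synthetic image.
        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(0)
        img = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)   # smooth test image
        B = 8
        blocks = img.reshape(64 // B, B, 64 // B, B).swapaxes(1, 2).reshape(-1, B, B)

        coeffs = dctn(blocks, axes=(1, 2), norm='ortho')
        var = coeffs.var(axis=0) + 1e-12

        # Bit allocation: bits roughly proportional to log2 of coefficient variance,
        # centred so the total is close to a 2 bits/pixel budget.
        bits = np.maximum(np.round(np.log2(var) - np.log2(var).mean() + 2.0), 0)

        step = 8.0 * np.sqrt(var) / (2.0 ** bits)        # uniform quantizer over +/- 4 sigma
        quant = np.round(coeffs / step) * step

        rec = idctn(quant, axes=(1, 2), norm='ortho')
        mse = float(np.mean((rec - blocks) ** 2))
        print("bits/pixel:", bits.sum() / (B * B), " MSE:", round(mse, 4))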

  14. PEB bake optimization for process window improvement of mixed iso-dense pattern

    NASA Astrophysics Data System (ADS)

    Liau, C. Y.; Lee, C. H.; Kang, J. T.; Yoon, S. W.; Loo, Christopher; Seow, Bertrand; Sheu, W. B.

    2005-08-01

    We have shown that process effects induced by raising the post-exposure bake (PEB) temperature in the process flow of chemically amplified photoresists can lead to significant improvements in depth-of-focus (DOF), exposure latitude (EL), and small-geometry printing capability. Due to improved acid dose contrasts and a balanced optimization of acid diffusion in the presence of quencher, the PEB temperature increase has enabled the printing of iso and semi-dense spaces of 0.2 µm and below with a large DOF, using binary masks and 248 nm lithography, without worsening the iso-dense bias. The results and findings of a full patterning process in a device flow, with different PEB temperatures as a process enhancement, are presented. The main objective of this study is to demonstrate how, using KrF lithography with binary masks and no optical proximity correction (OPC) or other reticle enhancement technique (RET), the process latitude can be improved. Lithographic process latitudes, intra-field critical dimension (CD) uniformity, and resist profiles of different PEB processes are evaluated. The after-etch profiles are also investigated to ensure the feasibility of this technique.

  15. Highly optimized tolerance and power laws in dense and sparse resource regimes.

    PubMed

    Manning, M; Carlson, J M; Doyle, J

    2005-07-01

    Power law cumulative frequency (P) versus event size (l) distributions, P(≥ l) ~ l^(-α), are frequently cited as evidence for complexity and serve as a starting point for linking theoretical models and mechanisms with observed data. Systems exhibiting this behavior present fundamental mathematical challenges in probability and statistics. The broad span of length and time scales associated with heavy-tailed processes often requires special sensitivity to distinctions between discrete and continuous phenomena. A discrete highly optimized tolerance (HOT) model, referred to as the probability, loss, resource (PLR) model, gives the exponent α = 1/d as a function of the dimension d of the underlying substrate in the sparse resource regime. This agrees well with data for wildfires, web file sizes, and electric power outages. However, another HOT model, based on a continuous (dense) distribution of resources, predicts α = 1 + 1/d. In this paper we describe and analyze a third model, the cuts model, which exhibits both behaviors but in different regimes. We use the cuts model to show that all three models agree in the dense resource limit. In the sparse resource regime, the continuum model breaks down, but in this case, the cuts and PLR models are described by the same exponent.

  16. Optimization of microbial inactivation of shrimp by dense phase carbon dioxide.

    PubMed

    Ji, Hongwu; Zhang, Liang; Liu, Shucheng; Qu, Xiaojuan; Zhang, Chaohua; Gao, Jialong

    2012-05-01

    Microbial inactivation of Litopenaeus vannamei by dense phase carbon dioxide (DPCD) treatment was investigated, and a neural network was used to optimize the process parameters of microbial inactivation. The results showed that DPCD treatment had a remarkable bactericidal effect on the microorganisms of shrimp. A 3×5×2 three-layer neural network model was established. According to the neural network model, the inactivation effect was enhanced as pressure, temperature and exposure time increased, and temperature was the most important factor affecting microbial inactivation of shrimp. A cooked appearance of the shrimp after DPCD treatment was observed, which appears more acceptable to Chinese dietary custom. Therefore, the color change of shrimp by DPCD treatment could have a positive effect on quality attributes. A moderate temperature of 55 °C with 15 MPa for a 26 min treatment time achieved a 3.5-log reduction of total aerobic plate counts (TPC). This parameter combination might be appropriate for shrimp processing by DPCD.

  17. Optimized design and research of secondary microprism for dense array concentrating photovoltaic module

    NASA Astrophysics Data System (ADS)

    Yang, Guanghui; Chen, Bingzhen; Liu, Youqiang; Guo, Limin; Yao, Shun; Wang, Zhiyong

    2015-10-01

    As the critical component of a concentrating photovoltaic module, secondary concentrators can be effective in increasing the acceptance angle and incident light, as well as improving the energy uniformity of focal spots. This paper presents a design of a transmission-type secondary microprism for a dense array concentrating photovoltaic module. The 3-D model of this design is established in Solidworks and important parameters such as inclination angle and component height are optimized using Zemax. According to the design and simulation results, several secondary microprisms with different parameters are fabricated and tested in combination with a Fresnel lens and a multi-junction solar cell. The sun-simulator I-V test results show that the combination has the highest output power when the secondary microprism height is 5 mm and the top facet side length is 7 mm. Compared with the case without a secondary microprism, the output power improves by 11% when secondary microprisms are employed, indicating the indispensability of secondary microprisms in concentrating photovoltaic modules.

  18. Source mask optimization using real-coded genetic algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Chaoxing; Wang, Xiangzhao; Li, Sikun; Erdmann, Andreas

    2013-04-01

    Source mask optimization (SMO) is considered to be one of the technologies to push conventional 193 nm lithography to its ultimate limits. In comparison with other SMO methods that use an inverse problem formulation, SMO based on a genetic algorithm (GA) requires very little knowledge of the process and has the advantage of flexible problem formulation. Recent publications on SMO using a GA employ a binary-coded GA. In general, the performance of a GA depends not only on the merit or fitness function, but also on the parameters, operators and their algorithmic implementation. In this paper, we propose an SMO method using a real-coded GA where the source and mask solutions are represented by floating point strings instead of bit strings. In addition, the selection, crossover, and mutation operators are replaced by corresponding floating-point versions. Both binary-coded and real-coded genetic algorithms were implemented in two versions of SMO and compared in numerical experiments, where the target patterns are staggered contact holes and a logic pattern with critical dimensions of 100 nm, respectively. The results demonstrate the performance improvement of the real-coded GA in comparison to the binary-coded version. Specifically, these improvements can be seen in a better convergence behavior. For example, the numerical experiments for the logic pattern showed that the average number of generations to converge to a proper fitness of 6.0 using the real-coded method is 61.8% (100 generations) less than that using the binary-coded method.
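
    The floating-point operators that distinguish a real-coded GA (blend crossover and Gaussian mutation on real genes) can be illustrated on a toy fitness function as below; the sphere function and all GA settings are placeholders for the lithographic merit function and tuning used in the paper.

        # Toy real-coded GA: floating-point chromosomes, blend crossover, Gaussian
        # mutation and tournament selection on a placeholder fitness function.
        import numpy as np

        rng = np.random.default_rng(0)
        dim, pop_size, gens = 10, 60, 200

        def fitness(x):
            return -np.sum(x ** 2)                       # maximize (optimum at x = 0)

        pop = rng.uniform(-5, 5, size=(pop_size, dim))
        for gen in range(gens):
            scores = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[scores.argmax()].copy()]      # elitism
            while len(new_pop) < pop_size:
                # Tournament selection of two parents.
                i, j = rng.choice(pop_size, 2, replace=False)
                p1 = pop[i] if scores[i] > scores[j] else pop[j]
                i, j = rng.choice(pop_size, 2, replace=False)
                p2 = pop[i] if scores[i] > scores[j] else pop[j]
                # Arithmetic (blend) crossover on real genes.
                alpha = rng.uniform(size=dim)
                child = alpha * p1 + (1 - alpha) * p2
                # Gaussian mutation on a random subset of genes.
                mask = rng.random(dim) < 0.1
                child[mask] += rng.normal(scale=0.3, size=mask.sum())
                new_pop.append(child)
            pop = np.array(new_pop)

        print("best fitness:", round(max(fitness(ind) for ind in pop), 4))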

  19. Optimal control of coupled PDE networks with automated code generation

    NASA Astrophysics Data System (ADS)

    Papadopoulos, D.

    2012-09-01

    The purpose of this work is to present a framework for the optimal control of coupled PDE networks. A coupled PDE network is a system of partial differential equations coupled together. Such systems can be represented as a directed graph. A domain specific language (DSL)—an extension of the DOT language—is used for the description of such a coupled PDE network. The adjoint equations and the gradient, required for its optimal control, are computed with the help of a computer algebra system (CAS). Automated code generation techniques have been used for the generation of the PDE systems of both the direct and the adjoint equations. Both the direct and adjoint equations are solved with the standard finite element method. Finally, for the numerical optimization of the system, standard optimization techniques such as BFGS and Newton conjugate gradient are used.

  20. Genomic context analysis reveals dense interaction network between vertebrate ultraconserved non-coding elements

    PubMed Central

    Dimitrieva, Slavica; Bucher, Philipp

    2012-01-01

    Motivation: Genomic context analysis, also known as phylogenetic profiling, is widely used to infer functional interactions between proteins but rarely applied to non-coding cis-regulatory DNA elements. We were wondering whether this approach could provide insights about ultraconserved non-coding elements (UCNEs). These elements are organized as large clusters, so-called gene regulatory blocks (GRBs) around key developmental genes. Their molecular functions and the reasons for their high degree of conservation remain enigmatic. Results: In a special setting of genomic context analysis, we analyzed the fate of GRBs after a whole-genome duplication event in five fish genomes. We found that in most cases all UCNEs were retained together as a single block, whereas the corresponding target genes were often retained in two copies, one completely devoid of UCNEs. This ‘winner-takes-all’ pattern suggests that UCNEs of a GRB function in a highly cooperative manner. We propose that the multitude of interactions between UCNEs is the reason for their extreme sequence conservation. Supplementary information: Supplementary data are available at Bioinformatics online and at http://ccg.vital-it.ch/ucne/ PMID:22962458

  1. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or other methods for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  2. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, Vinay A.; Farvardin, Nariman

    1990-01-01

    The two dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless channel; an algorithm is described for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained and appropriate comparisons against a reference system designed for no channel error were rendered.

  4. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of a spectral amplitude coding-optical code division multiple access (SAC-OCDMA) system. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties, between intended and interfering subscribers, significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis by referring to the bit error rate (BER), signal to noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code, while using the SDD technique, provides high transmission capacity, reduces the receiver complexity, and provides better performance as compared to the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.

  5. Optimal bounds for parity-oblivious random access codes

    NASA Astrophysics Data System (ADS)

    Chailloux, André; Kerenidis, Iordanis; Kundu, Srijita; Sikora, Jamie

    2016-04-01

    Random access coding is an information task that has been extensively studied and found many applications in quantum information. In this scenario, Alice receives an n-bit string x, and wishes to encode x into a quantum state ρ_x, such that Bob, when receiving the state ρ_x, can choose any bit i ∈ [n] and recover the input bit x_i with high probability. Here we study two variants: parity-oblivious random access codes (RACs), where we impose the cryptographic property that Bob cannot infer any information about the parity of any subset of bits of the input apart from the single bits x_i; and even-parity-oblivious RACs, where Bob cannot infer any information about the parity of any even-size subset of bits of the input. In this paper, we provide the optimal bounds for parity-oblivious quantum RACs and show that they are asymptotically better than the optimal classical ones. Our results provide a large non-contextuality inequality violation and resolve the main open problem in a work of Spekkens et al (2009 Phys. Rev. Lett. 102 010401). Second, we provide the optimal bounds for even-parity-oblivious RACs by proving their equivalence to a non-local game and by providing tight bounds for the success probability of the non-local game via semidefinite programming. In the case of even-parity-oblivious RACs, the cryptographic property holds also in the device independent model.
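
    The flavor of the quantum advantage can be checked numerically for the standard 2-to-1 random access code from the Spekkens et al. setting cited above, where each bit is recovered with probability cos²(π/8) ≈ 0.854 versus the classical 3/4. The sketch below covers only this small example, not the general-n bounds derived in the paper.

        # Numerical check of the 2 -> 1 quantum random access code: two bits are
        # encoded in one qubit and each bit is recovered with probability
        # cos^2(pi/8) ~ 0.854, beating the classical bound of 3/4.
        import itertools
        import numpy as np

        def bloch_state(theta, phi=0.0):
            """Pure qubit state at polar angle theta, azimuth phi on the Bloch sphere."""
            return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

        def encode(x0, x1):
            # State along the diagonal Bloch direction ((-1)^x1, 0, (-1)^x0)/sqrt(2).
            bz, bx = (-1) ** x0, (-1) ** x1
            return bloch_state(np.arctan2(bx, bz))

        Z0, Z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # Z basis reads bit 0
        X0, X1 = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)

        probs = []
        for x0, x1 in itertools.product((0, 1), repeat=2):
            psi = encode(x0, x1)
            probs.append(abs(np.vdot(Z0 if x0 == 0 else Z1, psi)) ** 2)   # recover bit 0
            probs.append(abs(np.vdot(X0 if x1 == 0 else X1, psi)) ** 2)   # recover bit 1

        print("worst-case success:", round(min(probs), 4))    # cos^2(pi/8) ~ 0.8536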

  6. A simple model of optimal population coding for sensory systems.

    PubMed

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  7. Efficient sensory cortical coding optimizes pursuit eye movements

    PubMed Central

    Liu, Bing; Macellaio, Matthew V.; Osborne, Leslie C.

    2016-01-01

    In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214

  8. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization

  9. The microstructures of cold dense systems as informed by hard sphere models and optimal packings

    NASA Astrophysics Data System (ADS)

    Hopkins, Adam Bayne

    Sphere packings, or arrangements of "billiard balls" of various sizes that never overlap, are especially informative and broadly applicable models. In particular, a hard sphere model describes the important foundational case where potential energy due to attractive and repulsive forces is not present, meaning that entropy dominates the system's free energy. Sphere packings have been widely employed in chemistry, materials science, physics and biology to model a vast range of materials including concrete, rocket fuel, proteins, liquids and solid metals, to name but a few. Despite their richness and broad applicability, many questions about fundamental sphere packings remain unanswered. For example, what are the densest packings of identical three-dimensional spheres within certain defined containers? What are the densest packings of binary spheres (spheres of two different sizes) in three-dimensional Euclidean space R3? The answers to these two questions are important in condensed matter physics and solid-state chemistry. The former is important to the theory of nucleation in supercooled liquids and the latter in terms of studying the structure and stability of atomic and molecular alloys. The answers to both questions are useful when studying the targeted self-assembly of colloidal nanostructures. In this dissertation, putatively optimal answers to both of these questions are provided, and the applications of these findings are discussed. The methods developed to provide these answers, novel algorithms combining sequential linear and nonlinear programming techniques with targeted stochastic searches of configuration space, are also discussed. In addition, connections between the realizability of pair correlation functions and optimal sphere packings are studied, and mathematical proofs are presented concerning the characteristics of both locally and globally maximally dense structures in arbitrary dimension d. Finally, surprising and unexpected findings are

  10. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions applying the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  11. Audio coding based on rate distortion and perceptual optimization

    NASA Astrophysics Data System (ADS)

    Erne, Markus; Moschytz, George

    2000-04-01

    The time-frequency tiling, bit allocation and quantizer of most perceptual coding algorithms are either fixed or controlled by a perceptual model. The large variety of existing audio signals, each exhibiting different coding requirements due to their different temporal and spectral fine structure, suggests the use of a signal-adaptive algorithm. The framework described in this paper makes use of a signal-adaptive wavelet filterbank that allows any node of the wavelet-packet tree to be switched individually. Therefore each subband can have an individual time segmentation, and the overall time-frequency tiling can be adapted to the signal using optimization techniques. A rate-distortion optimality criterion can be defined which minimizes the distortion for a given rate in every subband, based on a perceptual model. Due to the additivity of the rate and distortion measures over disjoint covers of the input signal, an overall cost function including the cost of filterbank switching can be defined. Using dynamic programming techniques, the wavelet-packet tree can be pruned based on a top-down or bottom-up 'split-merge' decision in every node of the wavelet tree. Additionally, we can profit from temporal masking because each subband can have an individual segmentation in time without introducing time-domain artifacts such as pre-echo distortion.
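
    The bottom-up 'split-merge' decision described in this abstract can be sketched as a small recursion: at every node, compare the Lagrangian cost of stopping there with the cost of descending into its children plus a switching penalty. The sketch below uses made-up distortion/rate numbers and constants; real values would come from the perceptual model and the bit budget of each subband.

```python
# Minimal sketch of bottom-up split-merge pruning of a wavelet-packet tree.
LAMBDA = 0.1          # rate-distortion trade-off (illustrative)
SWITCH_COST = 0.02    # assumed cost of switching the filterbank at a node

def node_cost(distortion, rate):
    # Lagrangian cost J = D + lambda * R
    return distortion + LAMBDA * rate

def prune(tree):
    """tree is ('leaf', D, R) or ('node', D, R, left, right); returns (cost, pruned)."""
    if tree[0] == 'leaf':
        _, d, r = tree
        return node_cost(d, r), tree
    _, d, r, left, right = tree
    merged_cost = node_cost(d, r)                 # stop (merge) at this node
    lc, lt = prune(left)
    rc, rt = prune(right)
    split_cost = lc + rc + SWITCH_COST            # descend (split) into children
    if split_cost < merged_cost:
        return split_cost, ('node', d, r, lt, rt)
    return merged_cost, ('leaf', d, r)

# Toy wavelet-packet tree with (distortion, rate) attached to every node.
toy = ('node', 1.0, 10,
       ('leaf', 0.3, 6),
       ('node', 0.5, 8, ('leaf', 0.1, 5), ('leaf', 0.2, 4)))
print(prune(toy))
```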

  12. Iterative Phase Optimization of Elementary Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Müller, M.; Rivas, A.; Martínez, E. A.; Nigg, D.; Schindler, P.; Monz, T.; Blatt, R.; Martin-Delgado, M. A.

    2016-07-01

    Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.

  13. Iterative optimal subcritical aerodynamic design code including profile drag

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.

    1983-01-01

    A subcritical aerodynamic design computer code has been developed, which uses linearized aerodynamics along with sweep theory and airfoil data to obtain minimum total drag preliminary designs for multiple planform configurations. These optimum designs consist of incidence distributions yielding minimum total drag at design values of Mach number and lift and pitching moment coefficients. Linear lofting is used between airfoil stations. Solutions for isolated transport wings have shown that the solution is unique, and that including profile drag effects decreases tip loading and incidence relative to values obtained for minimum induced drag solutions. Further, including effects of variation of profile drag with Reynolds number can cause appreciable changes in the optimal design for tapered wings. Example solutions are also discussed for multiple planform configurations.

  15. Optimization of Coded Aperture Radioscintigraphy for Sentinel Lymph Node Mapping

    PubMed Central

    Fujii, Hirofumi; Idoine, John D.; Gioux, Sylvain; Accorsi, Roberto; Slochower, David R.; Lanza, Richard C.; Frangioni, John V.

    2011-01-01

    Purpose Radioscintigraphic imaging during sentinel lymph node (SLN) mapping could potentially improve localization; however, parallel-hole collimators have certain limitations. In this study, we explored the use of coded aperture (CA) collimators. Procedures Equations were derived for the six major dependent variables of CA collimators (i.e., masks) as a function of the ten major independent variables, and an optimized mask was fabricated. After validation, dual-modality CA and near-infrared (NIR) fluorescence SLN mapping was performed in pigs. Results Mask optimization required the judicious balance of competing dependent variables, resulting in sensitivity of 0.35%, XY resolution of 2.0 mm, and Z resolution of 4.2 mm at an 11.5 cm FOV. Findings in pigs suggested that NIR fluorescence imaging and CA radioscintigraphy could be complementary, but present difficult technical challenges. Conclusions This study lays the foundation for using CA collimation for SLN mapping, and also exposes several problems that require further investigation. PMID:21567254

  16. Optimization of Power Systems Using Real Coded Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Deep, Kusum

    2008-10-01

    This talk highlights the recently proposed real coded crossover operator, the Laplace Crossover (LX) of [1], and the real coded mutation operator, Power Mutation (PM) of [2]. The performance of LX and PM is compared with Heuristic Crossover (HX), Non-Uniform Mutation (NUM) and the Makinen, Periaux and Toivanen Mutation (MPTM). The test bed is a set of 20 test problems available in the global optimization literature. Various performance criteria, such as computational cost, success rate, solution quality, efficiency and reliability, are reported using two kinds of analysis. The results show that LX-PM outperforms all other GAs considered. In this paper, the above algorithms are extended to obtain global optimal solutions of constrained optimization problems. Constraints are handled using the parameter-less approach proposed by Deb, and the six RCGAs described above are modified accordingly. Comparison is shown with other existing RCGAs using Simulated Binary Crossover (SBX) and Polynomial Mutation (POL) of [3], [4]. Inclusion of the two operators SBX and POL gives rise to two more combinations, namely LX with POL and SBX with PM. Two new RCGAs, namely LX-POL and SBX-PM, are proposed by taking these two operators into account. Thus, in all, nine RCGAs are used for the comparative study, namely: LX-POL, LX-PM, LX-MPTM, LX-NUM, HX-PM, HX-MPTM, HX-NUM, SBX-POL and SBX-PM. A set of 25 benchmark test problems is chosen, consisting of linear/nonlinear objective functions and equality/inequality constraints. Comparison is made with respect to percentage of success, the average number of function evaluations and the execution time of successful runs. It is observed that the overall success rate of LX-POL is better than all other RCGAs. Based on extensive analysis, it is concluded that LX-POL clearly outperforms the other RCGAs considered in this study. The problem of optimizing directional overcurrent relays is modeled as a nonlinear constrained optimization problem. It is required
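
    For readers unfamiliar with the two operators named above, the sketch below shows the Laplace crossover and power mutation in their commonly published scalar form; the parameter values (a, b, p) and the bounds are illustrative choices, not those used in the talk.

```python
import math
import random

def laplace_crossover(x1, x2, a=0.0, b=0.35):
    """Laplace crossover (LX): offspring are displaced from the parents by a
    Laplace-distributed multiple of the parents' distance."""
    u, r = random.random(), random.random()
    beta = a - b * math.log(u) if r <= 0.5 else a + b * math.log(u)
    gap = abs(x1 - x2)
    return x1 + beta * gap, x2 + beta * gap

def power_mutation(x, lower, upper, p=10.0):
    """Power mutation (PM): perturb the gene toward one of its bounds with a
    power-distributed step."""
    x = min(max(x, lower), upper)          # keep the gene inside its bounds
    s = random.random() ** p
    t = (x - lower) / (upper - lower)
    return x - s * (x - lower) if t < random.random() else x + s * (upper - x)

# One crossover-plus-mutation step on a pair of scalar genes.
c1, c2 = laplace_crossover(1.2, 3.4)
print(power_mutation(c1, 0.0, 5.0), power_mutation(c2, 0.0, 5.0))
```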

  17. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. The Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO has been shown to be nearly two times faster than the TGASU code.
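
    The unfolding problem solved by such a code can be written as a least-squares search over candidate spectra. The sketch below applies a plain global-best PSO (not the SDPSO implementation itself) to a synthetic response matrix and pulse-height distribution; all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((8, 5))                       # toy response matrix: 8 channels x 5 energy bins
true_phi = np.array([1.0, 0.5, 0.2, 0.8, 0.3])
measured = R @ true_phi                      # synthetic pulse-height distribution

def cost(phi):
    # Misfit between predicted and measured pulse-height distributions.
    return np.linalg.norm(R @ np.clip(phi, 0, None) - measured)

n_particles, n_dim, iters = 30, 5, 300
pos = rng.random((n_particles, n_dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration constants
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, None)        # keep the unfolded fluxes non-negative
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print(np.round(gbest, 2), "vs true", true_phi)
```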

  18. The optimization of random network coding in wireless MESH networks

    NASA Astrophysics Data System (ADS)

    Pang, Chunjiang; Pan, Xikun

    2013-03-01

    In order to improve the efficiency of wireless mesh network transmission, this paper focuses on network coding technology. Using network coding can significantly increase a wireless mesh network's throughput, but it inevitably adds computational complexity to the network, and traditional linear network coding algorithms require awareness of the whole network topology, which is impractical in the ever-changing topology of wireless mesh networks. In this paper, we use a distributed network coding strategy: random network coding, which does not need to know the whole topology of the network. In order to decrease the computational complexity, this paper suggests an improved strategy for random network coding: do not code packets that bring no benefit to the overall transmission. We list several situations in which coding is not necessary. Simulation results show that applying these strategies can improve the efficiency of wireless mesh network transmission.
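
    The core mechanics of random network coding can be illustrated in a few lines: each transmitted packet is a random linear combination of the source packets, and a receiver decodes once it has collected enough independent combinations. The sketch below works over GF(2) (XOR combinations) with arbitrary packet contents; practical systems often use larger fields such as GF(2^8).

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                            # number of source packets
packets = rng.integers(0, 2, size=(K, 16))       # K packets of 16 bits each

def encode():
    """Emit one coded packet: a random XOR combination plus its coding vector."""
    coeffs = rng.integers(0, 2, size=K)
    return coeffs, (coeffs @ packets) % 2

def decode(received):
    """Gauss-Jordan elimination over GF(2); returns None until full rank."""
    A = np.array([c for c, _ in received]) % 2
    B = np.array([p for _, p in received]) % 2
    row = 0
    for col in range(K):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None
        A[[row, pivot]], B[[row, pivot]] = A[[pivot, row]], B[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2
                B[r] = (B[r] + B[row]) % 2
        row += 1
    return B[:K]

received, decoded = [], None
while decoded is None:                           # keep collecting coded packets
    received.append(encode())
    if len(received) >= K:
        decoded = decode(received)
print(np.array_equal(decoded, packets))
```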

  19. Optimal spike-based communication in excitable networks with strong-sparse and weak-dense links.

    PubMed

    Teramae, Jun-nosuke; Tsubo, Yasuhiro; Fukai, Tomoki

    2012-01-01

    The connectivity of complex networks and its functional implications have been attracting much interest in many physical, biological and social systems. However, the significance of the weight distributions of network links remains largely unknown except for uniformly- or Gaussian-weighted links. Here, we show analytically and numerically that recurrent neural networks can robustly generate internal noise optimal for spike transmission between neurons with the help of a long-tailed distribution in the weights of recurrent connections. The structure of spontaneous activity in such networks involves weak-dense connections that redistribute excitatory activity over the network as noise sources to optimally enhance the responses of individual neurons to input at sparse-strong connections, thus opening multiple signal transmission pathways. Electrophysiological experiments confirm the importance of a highly broad connectivity spectrum supported by the model. Our results identify a simple network mechanism for internal noise generation by highly inhomogeneous connection strengths supporting both stability and optimal communication.

  20. Optimal spike-based communication in excitable networks with strong-sparse and weak-dense links

    NASA Astrophysics Data System (ADS)

    Teramae, Jun-Nosuke; Tsubo, Yasuhiro; Fukai, Tomoki

    2012-07-01

    The connectivity of complex networks and its functional implications have been attracting much interest in many physical, biological and social systems. However, the significance of the weight distributions of network links remains largely unknown except for uniformly- or Gaussian-weighted links. Here, we show analytically and numerically that recurrent neural networks can robustly generate internal noise optimal for spike transmission between neurons with the help of a long-tailed distribution in the weights of recurrent connections. The structure of spontaneous activity in such networks involves weak-dense connections that redistribute excitatory activity over the network as noise sources to optimally enhance the responses of individual neurons to input at sparse-strong connections, thus opening multiple signal transmission pathways. Electrophysiological experiments confirm the importance of a highly broad connectivity spectrum supported by the model. Our results identify a simple network mechanism for internal noise generation by highly inhomogeneous connection strengths supporting both stability and optimal communication.

  1. Image-Guided Non-Local Dense Matching with Three-Steps Optimization

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Zhang, Yongjun; Yue, Zhaoxi

    2016-06-01

    This paper introduces a new image-guided non-local dense matching algorithm that focuses on how to solve the following problems: 1) mitigating the influence of vertical parallax to the cost computation in stereo pairs; 2) guaranteeing the performance of dense matching in homogeneous intensity regions with significant disparity changes; 3) limiting the inaccurate cost propagated from depth discontinuity regions; 4) guaranteeing that the path between two pixels in the same region is connected; and 5) defining the cost propagation function between the reliable pixel and the unreliable pixel during disparity interpolation. This paper combines the Census histogram and an improved histogram of oriented gradient (HOG) feature together as the cost metrics, which are then aggregated based on a new iterative non-local matching method and the semi-global matching method. Finally, new rules of cost propagation between the valid pixels and the invalid pixels are defined to improve the disparity interpolation results. The results of our experiments using the benchmarks and the Toronto aerial images from the International Society for Photogrammetry and Remote Sensing (ISPRS) show that the proposed new method can outperform most of the current state-of-the-art stereo dense matching methods.
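
    As an illustration of the Census-style cost mentioned above (simplified; the paper combines a Census histogram with an improved HOG feature), the sketch below builds a plain Census transform and uses the Hamming distance between Census bit strings as the matching cost. The window size and the toy image pair are illustrative.

```python
import numpy as np

def census(img, radius=1):
    """Census transform: one bit per neighbour, set when the neighbour is darker."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)               # h x w x 8 bit planes for radius=1

def hamming_cost(census_l, census_r, x_l, y, disparity):
    """Matching cost between a left pixel and its disparity-shifted right pixel."""
    return int(np.count_nonzero(census_l[y, x_l] != census_r[y, x_l - disparity]))

left = np.array([[10, 12, 50, 52], [11, 13, 49, 51],
                 [10, 14, 48, 50], [12, 11, 47, 52]], dtype=float)
right = np.roll(left, -1, axis=1)                # toy pair with a 1-pixel shift
cl, cr = census(left), census(right)
print([hamming_cost(cl, cr, x_l=2, y=1, disparity=d) for d in range(3)])
```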

  2. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  3. SSE-based Thomas algorithm for quasi-block-tridiagonal linear equation systems, optimized for small dense blocks

    NASA Astrophysics Data System (ADS)

    Barnaś, Dawid; Bieniasz, Lesław K.

    2017-07-01

    We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
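
    For reference, the sketch below shows the underlying block Thomas recursion in plain NumPy (scalar code, no SSE/AVX, and without the extra corner coupling of the "quasi" form), verified against a dense solve. The block size and random test system are illustrative.

```python
import numpy as np

def block_thomas(lower, diag, upper, rhs):
    """Solve a block-tridiagonal system; lower[0] and upper[-1] are unused."""
    n = len(diag)
    diag = [d.copy() for d in diag]
    rhs = [r.copy() for r in rhs]
    for i in range(1, n):                          # forward elimination
        factor = lower[i] @ np.linalg.inv(diag[i - 1])
        diag[i] = diag[i] - factor @ upper[i - 1]
        rhs[i] = rhs[i] - factor @ rhs[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(diag[-1], rhs[-1])     # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(diag[i], rhs[i] - upper[i] @ x[i + 1])
    return np.concatenate(x)

rng = np.random.default_rng(2)
n, m = 5, 3                                        # 5 block rows of small 3x3 blocks
diag = [rng.random((m, m)) + m * np.eye(m) for _ in range(n)]
upper = [rng.random((m, m)) for _ in range(n - 1)] + [None]
lower = [None] + [rng.random((m, m)) for _ in range(n - 1)]
rhs = [rng.random(m) for _ in range(n)]
x = block_thomas(lower, diag, upper, rhs)

# Check against a dense solve of the assembled system.
A, b = np.zeros((n * m, n * m)), np.concatenate(rhs)
for i in range(n):
    A[i*m:(i+1)*m, i*m:(i+1)*m] = diag[i]
    if i > 0:
        A[i*m:(i+1)*m, (i-1)*m:i*m] = lower[i]
    if i < n - 1:
        A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = upper[i]
print(np.allclose(x, np.linalg.solve(A, b)))
```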

  4. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
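
    A toy illustration of the proactive, coding-based recovery idea (a minimal XOR-parity sketch, not the dissertation's scheme): data streams travel on disjoint links together with one parity stream, so the loss of any single link can be repaired at the receiver without retransmission.

```python
import numpy as np

rng = np.random.default_rng(3)
streams = rng.integers(0, 2, size=(3, 8))          # three data links, 8 bits each
parity = np.bitwise_xor.reduce(streams, axis=0)    # extra protection link

failed = 1                                         # suppose link 1 is cut
survivors = [streams[i] for i in range(3) if i != failed]
recovered = parity.copy()
for bits in survivors:
    recovered ^= bits                              # XOR out the surviving streams
print(np.array_equal(recovered, streams[failed]))  # lost stream restored instantly
```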

  6. Optimality of the genetic code with respect to protein stability and amino-acid frequencies

    PubMed Central

    Gilis, Dimitri; Massar, Serge; Cerf, Nicolas J; Rooman, Marianne

    2001-01-01

    Background The genetic code is known to be efficient in limiting the effect of mistranslation errors. A misread codon often codes for the same amino acid or one with similar biochemical properties, so the structure and function of the coded protein remain relatively unaltered. Previous studies have attempted to address this question quantitatively, by estimating the fraction of randomly generated codes that do better than the genetic code in respect of overall robustness. We extended these results by investigating the role of amino-acid frequencies in the optimality of the genetic code. Results We found that taking the amino-acid frequency into account decreases the fraction of random codes that beat the natural code. This effect is particularly pronounced when more refined measures of the amino-acid substitution cost are used than hydrophobicity. To show this, we devised a new cost function by evaluating in silico the change in folding free energy caused by all possible point mutations in a set of protein structures. With this function, which measures protein stability while being unrelated to the code's structure, we estimated that around two random codes in a billion (10^9) are fitter than the natural code. When alternative codes are restricted to those that interchange biosynthetically related amino acids, the genetic code appears even more optimal. Conclusions These results lead us to discuss the role of amino-acid frequencies and other parameters in the genetic code's evolution, in an attempt to propose a tentative picture of primitive life. PMID:11737948

  7. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important

  8. Extreme genetic code optimality from a molecular dynamics calculation of amino acid polar requirement

    NASA Astrophysics Data System (ADS)

    Butler, Thomas; Goldenfeld, Nigel; Mathew, Damien; Luthey-Schulten, Zaida

    2009-06-01

    A molecular dynamics calculation of the amino acid polar requirement is used to score the canonical genetic code. Monte Carlo simulation shows that this computational polar requirement has been optimized by the canonical genetic code, an order of magnitude more than any previously known measure, effectively ruling out a vertical evolution dynamics. The sensitivity of the optimization to the precise metric used in code scoring is consistent with code evolution having proceeded through the communal dynamics of statistical proteins using horizontal gene transfer, as recently proposed. The extreme optimization of the genetic code therefore strongly supports the idea that the genetic code evolved from a communal state of life prior to the last universal common ancestor.

  9. Parameter optimization capability in the trajectory code PMAST (Point-Mass Simulation Tool)

    SciTech Connect

    Outka, D.E.

    1987-01-28

    Trajectory optimization capability has been added to PMAST through addition of the Recursive Quadratic Programming code VF02AD. The scope of trajectory optimization problems the resulting code can solve is very broad, as it takes advantage of the versatility of the original PMAST code. Most three-degree-of-freedom flight-vehicle problems can be simulated with PMAST, and up to 25 parameters specifying initial conditions, weights, control histories and other problem-deck inputs can be used to meet trajectory constraints in some optimal manner. This report outlines the mathematical formulation of the optimization technique, describes the input requirements and suggests guidelines for problem formulation. An example problem is presented to demonstrate the use and features of the optimization portions of the code.

  10. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users.

  11. GPU Optimizations for a Production Molecular Docking Code*

    PubMed Central

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users. PMID:26594667

  12. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    PubMed Central

    2011-01-01

    Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing the canonical code to be clearly situated in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal in this respect. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible

  13. Design of zero reference codes by means of a global optimization method.

    PubMed

    Saez-Landete, José; Alonso, José; Bernabeu, Eusebio

    2005-01-10

    Grating measurement systems can be used for displacement and angle measurements. They require zero reference codes to obtain zero reference signals and absolute measurements. The zero reference signals are obtained from the autocorrelation of two identical zero reference codes. The design of codes which generate optimum signals is rather complex, especially for large codes. In this paper we present a global optimization method, a DIRECT algorithm, for the design of zero reference codes. This method proves to be a powerful tool for solving this inverse problem.
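
    The basic mechanism described above is easy to reproduce: autocorrelating a binary zero reference code yields the zero reference signal, whose central peak should stand well above its side lobes. The 16-slot code below is an arbitrary example, not an optimized design.

```python
import numpy as np

code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0])
signal = np.correlate(code, code, mode='full')     # zero reference signal vs. displacement
print(signal)
print("central peak:", signal.max(), "largest side lobe:", np.sort(signal)[-2])
```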

  14. Design of zero reference codes by means of a global optimization method

    NASA Astrophysics Data System (ADS)

    Saez Landete, José; Alonso, José; Bernabeu, Eusebio

    2005-01-01

    Grating measurement systems can be used for displacement and angle measurements. They require zero reference codes to obtain zero reference signals and absolute measurements. The zero reference signals are obtained from the autocorrelation of two identical zero reference codes. The design of codes which generate optimum signals is rather complex, especially for large codes. In this paper we present a global optimization method, a DIRECT algorithm, for the design of zero reference codes. This method proves to be a powerful tool for solving this inverse problem.

  15. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  16. Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load

    PubMed Central

    Feng, Zhiyong

    2017-01-01

    Network densification is attracting increasing attention recently due to its ability to improve network capacity by spatial reuse and relieve congestion by offloading. However, excessive densification and aggressive offloading can also cause the degradation of network performance due to problems of interference and load. In this paper, with consideration of load issues, we study the optimal base station density that maximizes the throughput of the network. The expected link rate and the utilization ratio of the contention-based channel are derived as the functions of base station density using the Poisson Point Process (PPP) and Markov Chain. They reveal the rules of deployment. Based on these results, we obtain the throughput of the network and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for the network densification. PMID:28891997

  17. Wing design code using three-dimensional Euler equations and optimization

    NASA Technical Reports Server (NTRS)

    Chang, I-Chung; Torres, Francisco J.; Van Dam, C. P.

    1991-01-01

    This paper describes a new wing design code which is based on the Euler equations and a constrained numerical optimization technique. The geometry modification is based on a set of fundamental modes defined on the unit interval. A design example involving a high speed civil transport wing is presented to demonstrate the usefulness of the design code. It is shown that the use of an Euler solver in the direct numerical optimization procedures is affordable on the current generation of supercomputers.

  18. Evolution of the genetic code: partial optimization of a random code for robustness to translation error in a rugged fitness landscape

    PubMed Central

    Novozhilov, Artem S; Wolf, Yuri I; Koonin, Eugene V

    2007-01-01

    Background The standard genetic code table has a distinctly non-random structure, with similar amino acids often encoded by codons series that differ by a single nucleotide substitution, typically, in the third or the first position of the codon. It has been repeatedly argued that this structure of the code results from selective optimization for robustness to translation errors such that translational misreading has the minimal adverse effect. Indeed, it has been shown in several studies that the standard code is more robust than a substantial majority of random codes. However, it remains unclear how much evolution the standard code underwent, what is the level of optimization, and what is the likely starting point. Results We explored possible evolutionary trajectories of the genetic code within a limited domain of the vast space of possible codes. Only those codes were analyzed for robustness to translation error that possess the same block structure and the same degree of degeneracy as the standard code. This choice of a small part of the vast space of possible codes is based on the notion that the block structure of the standard code is a consequence of the structure of the complex between the cognate tRNA and the codon in mRNA where the third base of the codon plays a minimum role as a specificity determinant. Within this part of the fitness landscape, a simple evolutionary algorithm, with elementary evolutionary steps comprising swaps of four-codon or two-codon series, was employed to investigate the optimization of codes for the maximum attainable robustness. The properties of the standard code were compared to the properties of four sets of codes, namely, purely random codes, random codes that are more robust than the standard code, and two sets of codes that resulted from optimization of the first two sets. The comparison of these sets of codes with the standard code and its locally optimized version showed that, on average, optimization of random codes

  19. Optimization of Ambient Noise Cross-Correlation Imaging Across Large Dense Array

    NASA Astrophysics Data System (ADS)

    Sufri, O.; Xie, Y.; Lin, F. C.; Song, W.

    2015-12-01

    Ambient Noise Tomography is currently one of the most studied topics of seismology. It offers the possibility of studying the physical properties of rocks from shallow subsurface depths to upper-mantle depths using recorded noise sources. A network of new seismic sensors, capable of recording continuous seismic noise and processing it on-site, could help assess the possible risk of volcanic activity on a volcano and help understand the changes in physical properties of a fault before and after an earthquake occurs. This new seismic sensor technology could also be used in the oil and gas industry to estimate the depletion rate of a reservoir and to improve velocity models for obtaining better seismic reflection cross-sections. Our recent NSF-funded project brings seismologists, signal processors, and computer scientists together to develop a new ambient noise seismic imaging system that records continuous seismic noise, processes it on-site, and sends Green's functions and/or tomography images to the network. Such an imaging system requires an optimal number of sensors, sensor communication, and processing of the recorded data. In order to solve these problems, we first addressed the problem of the optimal number of sensors and the communication between them using a small-aperture dense network, the Sweetwater Array, deployed by Nodal Seismic in 2014. We downloaded ~17 days of continuous data from 2268 one-component stations between March 30 and April 16, 2015, from the IRIS DMC and performed cross-correlation to determine the lag times between station pairs. The lag times were then arranged in matrix form. Our goal is to select random lag-time values in the matrix, treat all other elements as missing or unknown, and apply a matrix completion technique to find out how close the completed results are to the actual calculated values. This would give us a better idea
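
    The elementary processing step referred to above, cross-correlating two noise records to recover the lag time between a station pair, can be sketched as follows. The synthetic noise, sampling rate, and 0.3 s delay are illustrative, not Sweetwater Array data.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0                                   # samples per second
noise = rng.standard_normal(5_000)           # common diffuse noise field
delay = 30                                   # 0.3 s apparent travel time, in samples

sta_a = noise[:-delay]                       # station A record
sta_b = noise[delay:]                        # station B "hears" the same field earlier

xcorr = np.correlate(sta_a - sta_a.mean(), sta_b - sta_b.mean(), mode='full')
lags = np.arange(-(len(sta_b) - 1), len(sta_a))
print("estimated lag (s):", lags[xcorr.argmax()] / fs)   # should recover ~0.3 s
```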

  20. Genome-wide analysis of transcriptional regulators in human HSPCs reveals a densely interconnected network of coding and noncoding genes.

    PubMed

    Beck, Dominik; Thoms, Julie A I; Perera, Dilmi; Schütte, Judith; Unnikrishnan, Ashwin; Knezevic, Kathy; Kinston, Sarah J; Wilson, Nicola K; O'Brien, Tracey A; Göttgens, Berthold; Wong, Jason W H; Pimanda, John E

    2013-10-03

    Genome-wide combinatorial binding patterns for key transcription factors (TFs) have not been reported for primary human hematopoietic stem and progenitor cells (HSPCs), and have constrained analysis of the global architecture of molecular circuits controlling these cells. Here we provide high-resolution genome-wide binding maps for a heptad of key TFs (FLI1, ERG, GATA2, RUNX1, SCL, LYL1, and LMO2) in human CD34(+) HSPCs, together with quantitative RNA and microRNA expression profiles. We catalog binding of TFs at coding genes and microRNA promoters, and report that combinatorial binding of all 7 TFs is favored and associated with differential expression of genes and microRNA in HSPCs. We also uncover a previously unrecognized association between FLI1 and RUNX1 pairing in HSPCs, we establish a correlation between the density of histone modifications that mark active enhancers and the number of overlapping TFs at a peak, we demonstrate bivalent histone marks at promoters of heptad target genes in CD34(+) cells that are poised for later expression, and we identify complex relationships between specific microRNAs and coding genes regulated by the heptad. Taken together, these data reveal the power of integrating multifactor sequencing of chromatin immunoprecipitates with coding and noncoding gene expression to identify regulatory circuits controlling cell identity.

  1. Aircraft Course Optimization Tool Using GPOPS MATLAB Code

    DTIC Science & Technology

    2012-03-01

    preceding paragraph and in reality relies heavily on the pseudospectral portion of GPOPS' name. More specifically, GPOPS uses the Radau Pseudospectral... Software for Solving Multiple-Phase Optimal Control Problems Using hp-Adaptive Pseudospectral Methods," 2011. 9. Gill, P. E., Murray, W., and Saunders, M

  2. On RD optimized progressive image coding using JPEG.

    PubMed

    In, J; Shirani, S; Kossentini, F

    1999-01-01

    Among the many different modes of operations allowed in the current JPEG standard, the sequential and progressive modes are the most widely used. While the sequential JPEG mode yields essentially the same level of compression performance for most encoder implementations, the performance of progressive JPEG depends highly upon the designed encoder structure. This is due to the flexibility the standard leaves open in designing progressive JPEG encoders. In this work, a rate-distortion (RD) optimized JPEG compliant progressive encoder is presented that produces a sequence of scans, ordered in terms of decreasing importance. Our encoder outperforms an optimized sequential JPEG encoder in terms of compression efficiency, substantially at low and high bit rates. Moreover, unlike existing JPEG compliant encoders, our encoder can achieve precise rate/distortion control. Substantially better compression performance and precise rate control, provided by our progressive JPEG compliant encoding algorithm, are two highly desired features currently sought for the emerging JPEG-2000 standard.

  3. Joint optimization of run-length coding, Huffman coding, and quantization table with complete baseline JPEG decoder compatibility.

    PubMed

    Yang, En-hui; Wang, Longji

    2009-01-01

    To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc.

  4. Study of dense helium plasma in the optimal hypernetted chain approximation

    SciTech Connect

    Mueller, H.; Langanke, K. )

    1994-01-01

    We have studied the helium plasma in the hypernetted chain approximation considering both short-ranged internuclear and long-ranged Coulomb interactions. The optimal two-particle wave function has been determined in fourth order; fifth-order corrections have been considered in the calculation of the two-body and three-body correlation functions. The latter has been used to determine the pycnonuclear triple-alpha fusion rate in the density regime 10^8 g/cm^3 ≤ ρ ≤ 10^10 g/cm^3, which is of importance for the crust evolution of an accreting old neutron star. The influence of three-particle terms in the many-body wave function on the rate is estimated within an additional variational hypernetted chain calculation. Our results support the idea that the helium liquid undergoes a phase transition to stable ^8Be matter at densities ρ ≈ 3×10^9 g/cm^3, as the plasma-induced screening potential then becomes strong enough to bind the ^8Be ground state.

  5. Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Garbacea, Ilie

    2006-01-01

    In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. It follows that λ is a function of rate, distortion and coding input statistics and can be written as λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k0, with β, δ and k0 as coding constants, where σ² is the variance of the prediction error input. λ(R, D, σ²) describes its ubiquitous relationship with coding statistics and coding input in hybrid video coding such as H.263, MPEG-2/4 and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables fine encoder design and encoder control.
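
    The quoted estimate is simple to evaluate per coding unit; the sketch below transcribes it directly, with placeholder values for the constants β, δ and k0 and for the per-block statistics.

```python
import math

def adaptive_lambda(rate, distortion, sigma2, beta=1.0, delta=0.0, k0=0.0):
    """lambda(R, D, sigma^2) = beta*(ln(sigma^2/D) + delta)*D/R + k0."""
    return beta * (math.log(sigma2 / distortion) + delta) * distortion / rate + k0

# Example with arbitrary per-macroblock statistics.
print(adaptive_lambda(rate=0.8, distortion=4.0, sigma2=25.0))
```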

  6. Local Code Generation and Compaction in Optimizing Microcode Compilers

    DTIC Science & Technology

    1982-12-01

    Research in compiler optimization suggests that a large number of register classes tends to make register allocation more difficult [Kim 79, Leverett...] ... when allocating registers for a micromachine. The microcode register allocation schemes designed by Kim and Tan [Kim 79] and DeWitt [DeWitt 76] are...

  7. Optimal coding of vectorcardiographic sequences using spatial prediction.

    PubMed

    Augustyniak, Piotr

    2007-05-01

    This paper discusses principles, implementation details, and advantages of a sequence coding algorithm applied to the compression of vectorcardiograms (VCG). The main novelty of the proposed method is the automatic management of distortion distribution controlled by the local signal contents in both technical and medical aspects. As in clinical practice, the VCG loops representing P, QRS, and T waves in the three-dimensional (3-D) space are considered here as three simultaneous sequences of objects. Because of the similarity of neighboring loops, encoding the values of prediction error significantly reduces the data set volume. The residual values are de-correlated with the discrete cosine transform (DCT) and truncated at a certain energy threshold. The presented method is based on the irregular temporal distribution of medical data in the signal and takes advantage of variable sampling frequency for automatically detected VCG loops. The features of the proposed algorithm are confirmed by the results of the numerical experiment carried out for a wide range of real records. The average data reduction ratio reaches a value of 8.15 while the percent root-mean-square difference (PRD) distortion ratio for the most important sections of signal does not exceed 1.1%.
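
    The residual-coding chain described above (predict the current loop from its neighbour, transform the prediction error with the DCT, and truncate at an energy threshold) can be sketched as follows; the synthetic "loops" and the 95% energy threshold are illustrative, and no perceptual or clinical weighting is applied.

```python
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 2 * np.pi, 128)
previous_loop = np.sin(t) + 0.1 * np.sin(5 * t)
current_loop = previous_loop + 0.05 * np.sin(9 * t)       # similar neighbouring loop

residual = current_loop - previous_loop                    # prediction error
coeffs = dct(residual, norm='ortho')

# Keep the largest coefficients carrying 95% of the residual energy.
order = np.argsort(np.abs(coeffs))[::-1]
energy = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
keep = order[:np.searchsorted(energy, 0.95) + 1]
truncated = np.zeros_like(coeffs)
truncated[keep] = coeffs[keep]

reconstructed = previous_loop + idct(truncated, norm='ortho')
prd = 100 * np.linalg.norm(current_loop - reconstructed) / np.linalg.norm(current_loop)
print("kept", len(keep), "of", len(coeffs), "coefficients, PRD = %.2f%%" % prd)
```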

  8. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software is presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.

  9. The genetic code and its optimization for kinetic energy conservation in polypeptide chains.

    PubMed

    Guilloux, Antonin; Jestin, Jean-Luc

    2012-08-01

    Why is the genetic code the way it is? Concepts from fields as diverse as molecular evolution, classical chemistry, biochemistry and metabolism have been used to define selection pressures most likely to be involved in the shaping of the genetic code. Here minimization of kinetic energy disturbances during protein evolution by mutation allows an optimization of the genetic code to be highlighted. The quadratic forms corresponding to the kinetic energy term are considered over the field of rational numbers. Arguments are given to support the introduction of notions from basic number theory within this context. The observations found to be consistent with this minimization are statistically significant. The genetic code may well have been optimized according to energetic criteria so as to improve folding and dynamic properties of polypeptide chains. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
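
    One example of an easily implemented variable-length option of the kind alluded to above is a Golomb-Rice code; the sketch below encodes a block of non-negative residuals with several Rice parameters and keeps the cheapest, which illustrates the per-block option selection idea, though it is not the module's exact rule set.

```python
def rice_encode(value, k):
    """Golomb-Rice code: unary quotient, then k binary remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, '0{}b'.format(k)) if k > 0 else ''
    return '1' * q + '0' + remainder

def encode_block(samples, options=(0, 1, 2, 3, 4)):
    """Try each code option on the block and keep the shortest encoding."""
    best_k, best_bits = None, None
    for k in options:
        bits = ''.join(rice_encode(s, k) for s in samples)
        if best_bits is None or len(bits) < len(best_bits):
            best_k, best_bits = k, bits
    return best_k, best_bits       # the chosen option id would be signalled in a header

samples = [0, 3, 1, 7, 2, 0, 5, 1]   # e.g. mapped prediction residuals (arbitrary)
k, bits = encode_block(samples)
print("selected k =", k, "encoded length =", len(bits), "bits")
```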

  11. Explanation of how to run the global local optimization code (GLO) to find surface heat flux

    SciTech Connect

    Aceves, S; Sahai, V; Stein, W

    1999-03-01

    From the evaluation[1] of the inverse techniques available, it was determined that the Global Local Optimization Code[2] can determine the surface heat flux using known experimental data at various points in the geometry. This code uses a whole domain approach in which an analysis code (such as TOPAZ2D or ABAQUS) can be run to get the appropriate data needed to minimize the heat flux function. This document is a compilation of our notes on how to run this code to find the surface heat flux. First, the code is described and the overall set-up procedure is reviewed. Then, creation of the configuration file is described. A specific configuration file is given with appropriate explanation. Using this information, the reader should be able to run GLO to find the surface heat flux.

  12. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  13. DOPEX-1D2C: A one-dimensional, two-constraint radiation shield optimization code

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1973-01-01

    A one-dimensional, two-constraint radiation shield weight optimization procedure and a computer program, DOPEX-1D2C, are described. DOPEX-1D2C uses the steepest-descent method to alter a set of initial (input) thicknesses of a spherical shield configuration to achieve a minimum weight while simultaneously satisfying two dose-rate constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. Code input instructions, a FORTRAN-4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is less than 1/2 minute on an IBM 7094.
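
    In the spirit of the steepest-descent procedure summarized above, the sketch below minimizes a layered shield weight under two exponential dose-rate constraints, handled here with a simple quadratic penalty and a finite-difference gradient. All densities, attenuation coefficients, dose limits, and step sizes are made-up numbers, not DOPEX-1D2C data or its exact algorithm.

```python
import numpy as np

rho = np.array([7.8, 1.0, 11.3])           # layer weights per unit thickness (assumed)
mu = np.array([[0.5, 0.2, 1.0],            # attenuation of dose type 1 per layer
               [0.1, 0.8, 0.6]])           # attenuation of dose type 2 per layer
d0 = np.array([1.0e4, 5.0e3])              # unshielded dose rates
limits = np.array([1.0, 2.0])              # allowed dose rates
penalty = 1.0e3

def doses(t):
    return d0 * np.exp(-mu @ t)            # exponential dose-thickness relation

def objective(t):
    violation = np.clip(doses(t) - limits, 0.0, None)
    return rho @ t + penalty * np.sum(violation ** 2)

def grad(t, eps=1e-6):
    g = np.zeros_like(t)                   # finite-difference gradient for brevity
    for i in range(len(t)):
        dt = np.zeros_like(t)
        dt[i] = eps
        g[i] = (objective(t + dt) - objective(t - dt)) / (2 * eps)
    return g

t = np.array([5.0, 5.0, 5.0])              # initial thicknesses
for _ in range(5000):                      # steepest-descent iterations
    t = np.clip(t - 1e-3 * grad(t), 0.0, None)

print("thicknesses:", np.round(t, 2), "doses:", np.round(doses(t), 3))
```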

  14. Optimized perfect reconstruction quadrature mirror filter (PR-QMF) based codes for multi-user communications

    NASA Astrophysics Data System (ADS)

    Hetling, Kenneth J.; Saulnier, Gary J.; Das, Pankaj K.

    1995-04-01

    In communications systems, the message signal is sometimes spread over a large bandwidth in order to realize performance gains in the presence of narrowband interference, multipath propagation, and multiuser interference. The extent to which performance is improved is highly dependent upon the spreading code implemented. Traditionally, the spreading codes have consisted of pseudo-noise (PN) sequences whose chip values are limited to bipolar values. Recently, however, alternatives to the PN sequences have been studied including wavelet based and PR-QMF based spreading codes. The spreading codes implemented are the basis functions of a particular wavelet transform or PR-QMF bank. Since the choice of available basis functions is much larger than that of PN sequences, it is hoped that better performance can be achieved by choosing a basis tailored to the system requirements mentioned above. In this paper, a design method is presented to construct a PR-QMF bank which will generate spreading codes optimized for operating in a multiuser interference environment. Objective functions are developed for the design criteria and a multivariable constrained optimization problem is employed to generate the coefficients used in the filter bank. Once the filter bank is complete, the spreading codes are extracted and implemented in the spread spectrum system. System bit error rate (BER) curves are generated from computer simulation for analysis. Curves are generated for both the single user and the CDMA environment and performance is compared to that attained using gold codes.

  15. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-09-09

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Efforts are underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  16. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2003-01-15

    All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Efforts are underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.

  17. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-09-14

    All project activities are now winding down. Follow-up tracer tests were conducted at several of the industrial test sites and analysis of the experimental data is currently underway. All required field work was completed during this quarter. In addition, the heavy medium cyclone simulation and expert system programs are nearly completed and user manuals are being prepared. Administrative activities (e.g., project documents, cost-sharing accounts, etc.) are being reviewed and prepared for final submission to DOE. All project reporting requirements are up to date. All financial expenditures are within approved limits.

  18. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2001-09-10

    The fieldwork associated with Task 1 (Baseline Assessment) was completed this quarter. Detailed cyclone inspections were completed at all but one plant during maintenance shifts. Analysis of the test samples is also currently underway in Task 4 (Sample Analysis). A Draft Recommendation was prepared for the management at each test site in Task 2 (Circuit Modification). All required procurements were completed. Density tracers were manufactured and tested for quality control purposes. Special sampling tools were also purchased and/or fabricated for each plant site. The preliminary experimental data show that the partitioning performance for all seven HMC circuits was generally good. This was attributed to well-maintained cyclones and good operating practices. However, the density tracers revealed that most circuits suffered from poor control of the media cutpoint. These problems were attributed to poor x-ray calibration and improper manual density measurements. These conclusions will be validated after the analyses of the composite samples have been completed.

  19. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    David M. Hyman

    2002-01-14

    All work associated with Task 1 (Baseline Assessment) was successfully completed and preliminary corrections/recommendations were provided back to the management at each test site. Detailed float-sink tests were completed for Site No. 1 and are currently underway for Sites No. 2-No. 4. Unfortunately, the work associated with sample analyses (Task 4--Sample Analysis) has been delayed because of a backlog of coal samples at the commercial laboratory participating in this project. As a result, a no-cost project time extension may be necessary in order to complete the project. A decision will be made at the end of the next reporting period. Some of the work completed this quarter included (i) development of mass balance routines for data analysis, (ii) formulation of an expert system rule base, and (iii) completion of statistical computations and mathematical curve fits for the density tracer test data. In addition, an "O & M Checklist" was prepared to provide plant operators with simple operating and maintenance guidelines that must be followed to obtain good HMC performance.

  20. Optimizations of the energy grid search algorithm in continuous-energy Monte Carlo particle transport codes

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.

    2015-11-01

    In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
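
    As an illustration of the idea described above (not the OpenMC implementation; the grid, nuclide, and bounds are assumptions), the sketch below restricts the binary search to the portion of the energy grid allowed by elastic scattering kinematics, so far fewer grid points need to be examined per lookup.

      # Toy illustration (not OpenMC code): shrink the searched portion of an
      # energy grid using a physics-derived bound before the binary search.
      # After elastic scattering off a nuclide of mass ratio A, the outgoing
      # energy lies in [alpha*E, E], so the new grid index must lie at or below
      # the pre-collision index -- only that window needs to be searched.
      import numpy as np

      energy_grid = np.logspace(-5, 7, 200_000)   # eV, hypothetical unionized grid

      def locate(grid, E, lo=0, hi=None):
          """Binary search for the index i with grid[i] <= E < grid[i+1]."""
          hi = len(grid) if hi is None else hi
          return lo + np.searchsorted(grid[lo:hi], E, side="right") - 1

      A = 238.0
      alpha = ((A - 1.0) / (A + 1.0)) ** 2

      E_in = 2.0e6                                  # pre-collision energy (eV)
      i_in = locate(energy_grid, E_in)              # full-grid search (baseline)

      E_out = 1.99e6                                # post-scatter energy, within [alpha*E_in, E_in]
      i_lo = locate(energy_grid, alpha * E_in)      # lower bound of the allowed window
      # optimized lookup: search only the kinematically allowed window
      i_out = locate(energy_grid, E_out, lo=i_lo, hi=i_in + 2)
      assert energy_grid[i_out] <= E_out < energy_grid[i_out + 1]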

  1. Determination of an optimal unit pulse response function using real-coded genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jain, Ashu; Srinivasalu, Sanaga; Bhattacharjya, Rajib Kumar

    2005-03-01

    This paper presents the results of employing a real-coded genetic algorithm (GA) to the problem of determining the optimal unit pulse response function (UPRF) using the historical data from watersheds. The existing linear programming (LP) formulation has been modified, and a new problem formulation is proposed. The proposed problem formulation consists of fewer decision variables, only one constraint, and a non-linear objective function. The proposed problem formulation can be used to determine an optimal UPRF of a watershed from a single storm or a composite UPRF from multiple storms. The proposed problem formulation coupled with the solution technique of real-coded GA is tested using the effective rainfall and runoff data derived from two different watersheds and the results are compared with those reported earlier by others using LP methods. The model performance is evaluated using a wide range of standard statistical measures. The results obtained in this study indicate that the real-coded GA can be a suitable alternative to the problem of determining an optimal UPRF from a watershed. The proposed problem formulation when solved using real-coded GA resulted in smoother optimal UPRF without the need of additional constraints. The proposed problem formulation can be particularly useful in determining the optimal composite UPRF from multiple storms in large watersheds having large time bases due to its limited number of decision variables and constraints.
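
    A minimal sketch of the approach, under assumed synthetic data and a simplified objective (not the authors' exact formulation), is given below: a real-coded GA searches for non-negative UPRF ordinates that minimize the squared error between observed runoff and effective rainfall convolved with the UPRF.

      # Illustrative real-coded GA (not the authors' exact formulation): find
      # non-negative unit pulse response ordinates u that minimize the squared
      # error between observed runoff and effective rainfall convolved with u.
      import numpy as np

      rng = np.random.default_rng(1)

      rainfall = np.array([0.0, 1.2, 2.5, 0.8, 0.0, 0.0])     # effective rainfall (assumed data)
      true_u   = np.array([0.1, 0.4, 0.3, 0.15, 0.05])        # synthetic "true" UPRF
      runoff   = np.convolve(rainfall, true_u)                 # synthetic observed runoff

      n_ord = len(true_u)

      def fitness(u):
          """Negative SSE between simulated and observed runoff (higher is better)."""
          return -np.sum((np.convolve(rainfall, u) - runoff) ** 2)

      def ga(pop_size=60, gens=300, mut_sigma=0.05):
          pop = rng.uniform(0.0, 1.0, size=(pop_size, n_ord))
          for _ in range(gens):
              scores = np.array([fitness(ind) for ind in pop])
              # tournament selection between random pairs
              idx = rng.integers(0, pop_size, size=(pop_size, 2))
              parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
              # blend (arithmetic) crossover + Gaussian mutation, clipped to stay non-negative
              w = rng.uniform(size=(pop_size, 1))
              children = w * parents + (1 - w) * parents[::-1]
              children += rng.normal(0.0, mut_sigma, size=children.shape)
              children = np.clip(children, 0.0, None)
              # elitism: keep the best individual found so far
              children[0] = pop[np.argmax(scores)]
              pop = children
          scores = np.array([fitness(ind) for ind in pop])
          return pop[np.argmax(scores)]

      best_u = ga()
      print("estimated UPRF ordinates:", np.round(best_u, 3))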

  2. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies the externally compiled programs of ZEMAX to the optimization of the phase mask within the normal optical design process, namely by defining an evaluation function for the wavefront coding system based on the consistency of the modulation transfer function (MTF) and by improving optimization speed through the introduction of mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal parameters of the phase mask and accelerating convergence through a genetic algorithm (GA); a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software then provides high-speed data exchange. The rotationally symmetric phase mask and the cubic phase mask have both been optimized by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the MTF becomes noticeably more consistent, and the operating temperature of the optimized system ranges from -40°C to 60°C. The results show that, thanks to its externally compiled functions and DDE, this optimization method makes it more convenient to define unconventional optimization goals and faster to optimize optical systems with special properties, and it is therefore of particular significance for the optimization of unconventional optical systems.

  3. Next-generation acceleration and code optimization for light transport in turbid media using GPUs.

    PubMed

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-09-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml).
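
    The core optimization described above can be illustrated with a toy Numba/CUDA sketch (this is not the GPU-MCML code; the kernel, histogram, and random deposits are purely illustrative, and a CUDA-capable GPU with Numba installed is required): each thread block accumulates results in fast shared memory and flushes them to global memory with one atomic operation per bin per block, rather than one slow global atomic per deposit.

      # Toy sketch of the shared-memory optimization (NOT the GPU-MCML code):
      # each block accumulates photon-weight deposits in a shared-memory histogram
      # and flushes it to the global histogram with one atomic add per bin per
      # block, instead of one slow global atomic per deposit.
      import numpy as np
      from numba import cuda, float32
      from numba.cuda.random import create_xoroshiro128p_states, xoroshiro128p_uniform_float32

      NBINS = 128

      @cuda.jit
      def deposit_kernel(rng_states, global_hist, deposits_per_thread):
          shared_hist = cuda.shared.array(NBINS, float32)
          tid = cuda.threadIdx.x
          gid = cuda.grid(1)

          # zero the shared histogram cooperatively
          i = tid
          while i < NBINS:
              shared_hist[i] = 0.0
              i += cuda.blockDim.x
          cuda.syncthreads()

          # toy "simulation": deposit random weights into random depth bins
          for _ in range(deposits_per_thread):
              depth = xoroshiro128p_uniform_float32(rng_states, gid)
              weight = xoroshiro128p_uniform_float32(rng_states, gid)
              bin_idx = int(depth * NBINS) % NBINS
              cuda.atomic.add(shared_hist, bin_idx, weight)   # fast shared-memory atomic
          cuda.syncthreads()

          # flush: one global atomic per bin per block
          i = tid
          while i < NBINS:
              cuda.atomic.add(global_hist, i, shared_hist[i])
              i += cuda.blockDim.x

      threads, blocks = 256, 64
      rng_states = create_xoroshiro128p_states(threads * blocks, seed=42)
      hist = cuda.to_device(np.zeros(NBINS, dtype=np.float32))
      deposit_kernel[blocks, threads](rng_states, hist, 1000)
      print("total deposited weight:", hist.copy_to_host().sum())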

  4. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498

  5. Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Optimization

    DTIC Science & Technology

    1975-03-01

    "...Case of Nonlinear Constraints," in Optimization, R. Fletcher, Ed., Academic Press, 1969, pp. 37-47; 3. J. Abadie, "Application of the GRG Algorithm to... Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Optimization, by Leon S. Lasdon, Allan D. Waren, Arvind Jain... GRG methods are algorithms for solving nonlinear programs of general structure. An earlier paper discussed the basic principles of GRG and...

  6. Optimal Multicarrier Phase-Coded Waveform Design for Detection of Extended Targets

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2013-01-01

    We design a parametric multicarrier phase-coded (MCPC) waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. Traditional waveform design techniques provide only the optimal energy spectral density of the transmit waveform and suffer a performance loss in the synthesis process of the time-domain signal. Therefore, we opt for directly designing an MCPC waveform in terms of its time-frequency codes to obtain the optimal detection performance. First, we describe the modeling assumptions considering an extended target buried within signal-dependent clutter with known power spectral density, and deduce the performance characteristics of the optimal detector. Then, considering an MCPC signal transmission, we express the detection characteristics in terms of the phase codes of the MCPC waveform and propose to optimally design the MCPC signal by maximizing the detection probability. Our numerical results demonstrate that the designed MCPC signal attains the optimal detection performance and requires less computational time than the other parametric waveform design approach.
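
    A toy sketch of the underlying idea (not the paper's detector model; the target response, clutter covariance, and code sizes are assumptions) is shown below: the phase codes are searched directly to maximize an SINR-like detection metric for an extended target in signal-dependent clutter.

      # Toy sketch (not the paper's detector): choose multicarrier phase codes
      # directly, by random search, to maximize an SINR-like detection metric
      # for an extended target in signal-dependent clutter.
      import numpy as np

      rng = np.random.default_rng(0)
      N_sub, N_chip = 4, 8                       # subcarriers x chips (assumed sizes)
      n = N_sub * N_chip

      h = rng.standard_normal(n) + 1j * rng.standard_normal(n)        # extended-target response (assumed)
      C = rng.standard_normal((n, n)); R = C @ C.T + 0.1 * np.eye(n)  # clutter covariance (assumed, SPD)
      phases = 2 * np.pi * np.arange(4) / 4                           # QPSK phase alphabet

      def waveform(codes):
          """Unit-energy waveform built from per-(subcarrier, chip) phase codes."""
          s = np.exp(1j * codes).ravel()
          return s / np.linalg.norm(s)

      def detection_metric(codes):
          """Signal-dependent-clutter SINR proxy: |s^H h|^2 / (s^H R s)."""
          s = waveform(codes)
          return np.abs(np.vdot(s, h)) ** 2 / np.real(np.vdot(s, R @ s))

      best, best_val = None, -np.inf
      for _ in range(20_000):                     # crude random search over the code space
          codes = rng.choice(phases, size=(N_sub, N_chip))
          val = detection_metric(codes)
          if val > best_val:
              best, best_val = codes, val
      print("best SINR proxy:", best_val)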

  7. User’s Manual for Solid Propulsion Optimization Code (SPOC). Volume I. Technical Description

    DTIC Science & Technology

    1981-08-01

    User's Manual for Solid Propulsion Optimization Code (SPOC), 28 Mar 80 - 21 Aug 81, Volume I - Technical... trinitramine; Rate Catalyst RCATS: Fe2O3 -- Iron Oxide (Solid), FCH -- Ferrocene; Rate Catalyst RCATL: None available at the present (Liquid); Combustion STAB

  8. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511-keV source strength of 0.001 sq/s, the source location accuracy is expected to be ±0.2 deg.

  9. Optimizing the use of a sensor resource for opponent polarization coding

    PubMed Central

    Heras, Francisco J.H.

    2017-01-01

    Flies use specialized photoreceptors R7 and R8 in the dorsal rim area (DRA) to detect skylight polarization. R7 and R8 form a tiered waveguide (central rhabdomere pair, CRP) with R7 on top, filtering light delivered to R8. We examine how the division of a given resource, CRP length, between R7 and R8 affects their ability to code polarization angle. We model optical absorption to show how the length fractions allotted to R7 and R8 determine the rates at which they transduce photons, and correct these rates for transduction unit saturation. The rates give polarization signal and photon noise in R7, and in R8. Their signals are combined in an opponent unit, intrinsic noise added, and the unit's output analysed to extract two measures of coding ability, the number of discriminable polarization angles and the mutual information. A very long R7 maximizes opponent signal amplitude, but codes inefficiently due to photon noise in the very short R8. Discriminability and mutual information are optimized by maximizing the signal-to-noise ratio, SNR. At lower light levels approximately equal lengths of R7 and R8 are optimal because photon noise dominates. At higher light levels intrinsic noise comes to dominate and a shorter R8 is optimum. The optimum R8 length fraction falls to one-third. This intensity-dependent range of optimal length fractions corresponds to the range observed in different fly species and is not affected by transduction unit saturation. We conclude that a limited resource, rhabdom length, can be divided between two polarization sensors, R7 and R8, to optimize opponent coding. We also find that coding ability increases sub-linearly with total rhabdom length, according to the law of diminishing returns. Consequently, the specialized shorter central rhabdom in the DRA codes polarization twice as efficiently with respect to rhabdom length as the longer rhabdom used in the rest of the eye. PMID:28316880

  10. An intermediate language and machine-independent optimization issues in automatic code generation for vector processors

    SciTech Connect

    Youssefi, A.S.

    1989-01-01

    The underlying architecture of a vector processor is different from that of a sequential machine. Therefore, new software must be developed that runs on these machines. High-level languages for vector processors can be classified into two groups. The first includes sequential languages such as Fortran, Pascal, C, etc. These languages do not explicitly specify vector operations; hence, it becomes the task of the compiler to construct vector operations from the sequential code. The second group simplifies the task of the compiler by allowing the programmer to explicitly specify vector operations. In the process of implementing linear algebra operations such as matrix multiplication and first-order linear recurrences in current high-level languages and generating code on different vector processors, the author sees that efficient code cannot always be generated. By identifying these operations at the intermediate level, he shows how to generate optimal sequences of instructions for a particular machine. He specifies a vector intermediate language, VIL, as well as machine-independent optimization issues for vector processors. He explains how efficient code can be generated from these intermediate operations for different machines. He shows how these intermediate operations can be used in a model of automatic code generation. In machine-independent optimization, he explores four issues. Firstly, he considers the application of known optimization techniques for sequential machines to vector processors. Secondly, he discusses new optimization techniques that can be applied to intermediate operations. Thirdly, he discusses the application of lazy evaluation to operations such as rotation and transposition at the intermediate level. Finally, he shows that when the parser can determine some idioms about the shape or the contents of a matrix or a vector, then a considerable amount of space and time can be saved.

  11. A wavelet-based neural model to optimize and read out a temporal population code

    PubMed Central

    Luvizotto, Andre; Rennó-Costa, César; Verschure, Paul F. M. J.

    2012-01-01

    It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations where spatio-temporal transformations at lower levels form a compact stimulus encoding using TPC that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned

  12. Design and optimization of a portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processor Units (GPUs), exploiting aggressive data-parallelism and delivering higher performances for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing where code changes are very frequent, making it tedious and error prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.

  13. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecisions.

  14. Video coding using arbitrarily shaped block partitions in globally optimal perspective

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Murshed, Manzur

    2011-12-01

    Algorithms using content-based patterns to segment moving regions at the macroblock (MB) level have exhibited good potential for improved coding efficiency when embedded into the H.264 standard as an extra mode. The content-based pattern generation (CPG) algorithm provides a locally optimal result, as only one pattern can be optimally generated from a given set of moving regions; it fails to provide optimal results for multiple patterns over the entire set. Obviously, a globally optimal solution that clusters the set and then generates multiple patterns would enhance the performance further. However, a globally optimal solution is not achievable due to the non-polynomial nature of the clustering problem. In this paper, we propose a near-optimal content-based pattern generation (OCPG) algorithm which outperforms the existing approach. Coupling OCPG, which generates a set of patterns after clustering the MBs into several disjoint sets, with a direct pattern selection algorithm that allows all the MBs in multiple pattern modes outperforms the existing pattern-based coding when embedded into H.264.

  15. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    SciTech Connect

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan; Quinlan, Daniel

    2013-11-23

    This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr.Quinlan et. al at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  16. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    SciTech Connect

    Watkins, E.F.; Greenspan, E.

    1995-12-31

    SWAN is a useful code for optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost for an ICF target chamber shield, and the background radiation for explosive detection systems, and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module could handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability.

  17. Optimal performance of networked control systems with bandwidth and coding constraints.

    PubMed

    Zhan, Xi-Sheng; Sun, Xin-xiang; Li, Tao; Wu, Jie; Jiang, Xiao-Wei

    2015-11-01

    The optimal tracking performance of multiple-input multiple-output (MIMO) discrete-time networked control systems with bandwidth and coding constraints is studied in this paper. The optimal tracking performance of the networked control system is obtained by using the spectral factorization technique and partial fraction expansion. The obtained results demonstrate that the optimal performance is influenced by the directions and locations of the nonminimum phase zeros and unstable poles of the given plant. In addition, the characteristics of the reference signal, the encoding, and the bandwidth and additive white Gaussian noise (AWGN) of the communication channel also closely influence the optimal tracking performance. Some typical examples are given to illustrate the theoretical results.

  18. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  19. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics such that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video objects can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
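
    The kind of priority-weighted allocation described above can be illustrated with a small worked example (not the paper's algorithm; the hyperbolic R-D model and all numbers are assumptions): minimizing the weighted distortion sum subject to a total bit budget gives a closed-form allocation proportional to the square root of each object's weighted R-D constant.

      # Illustrative sketch (not the paper's method): closed-form bit allocation
      # among video objects under an assumed hyperbolic R-D model D_i = a_i / R_i,
      # weighted by automatically derived object priorities w_i. Minimizing
      # sum_i w_i * a_i / R_i subject to sum_i R_i = R_total gives
      # R_i = R_total * sqrt(w_i * a_i) / sum_j sqrt(w_j * a_j).
      import numpy as np

      R_total = 512_000                     # total bit budget for the frame group (assumed)
      w = np.array([0.6, 0.3, 0.1])         # object priorities from a visual-attention model (assumed)
      a = np.array([2.0e9, 1.5e9, 0.8e9])   # per-object R-D model constants (assumed)

      R = R_total * np.sqrt(w * a) / np.sum(np.sqrt(w * a))
      print("bits per video object:", np.round(R).astype(int))
      print("weighted distortion:", np.sum(w * a / R))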

  20. ETRANS: an energy transport system optimization code for distributed networks of solar collectors

    SciTech Connect

    Barnhart, J.S.

    1980-09-01

    The optimization code ETRANS was developed at the Pacific Northwest Laboratory to design and estimate the costs associated with energy transport systems for distributed fields of solar collectors. The code uses frequently cited layouts for dish and trough collectors and optimizes them on a section-by-section basis. The optimal section design is that combination of pipe diameter and insulation thickness that yields the minimum annualized system-resultant cost. Among the quantities included in the costing algorithm are (1) labor and materials costs associated with initial plant construction, (2) operating expenses due to daytime and nighttime heat losses, and (3) operating expenses due to pumping power requirements. Two preliminary series of simulations were conducted to exercise the code. The results indicate that transport system costs for both dish and trough collector fields increase with field size and receiver exit temperature. Furthermore, dish collector transport systems were found to be much more expensive to build and operate than trough transport systems. ETRANS itself is stable and fast-running and shows promise of being a highly effective tool for the analysis of distributed solar thermal systems.
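
    The section-by-section optimization can be illustrated with a rough sketch (not the ETRANS code; every cost model and constant below is an assumption): for one pipe section, the annualized cost combining installed cost, heat-loss cost, and pumping cost is evaluated over candidate pipe diameters and insulation thicknesses, and the cheapest combination is selected.

      # Illustrative sketch only (not the ETRANS code): pick, for one pipe section,
      # the pipe diameter and insulation thickness minimizing an annualized cost
      # made of (1) installed cost, (2) heat-loss cost, and (3) pumping-power cost.
      # All cost models and constants below are assumptions for illustration.
      import numpy as np

      diameters = np.array([0.05, 0.08, 0.10, 0.15, 0.20])      # m, candidate pipe diameters
      insulation = np.linspace(0.01, 0.15, 15)                   # m, candidate insulation thickness
      L, dT, hours = 50.0, 250.0, 3000.0                         # section length, temp. difference, op. hours/yr
      k_ins, e_cost, crf = 0.05, 0.03, 0.1                       # W/m-K, $/kWh, capital recovery factor
      flow = 2.0                                                 # kg/s of heat-transfer fluid

      def annualized_cost(d, t):
          capital = crf * L * (150.0 * d + 400.0 * t)            # $/yr, assumed installed-cost model
          # radial conduction through the insulation shell, converted to $/yr
          q_loss = 2 * np.pi * k_ins * L * dT / np.log((d / 2 + t) / (d / 2))
          heat = e_cost * q_loss * hours / 1000.0
          # pumping cost: pressure drop ~ 1/d^5 for fixed mass flow (Darcy-Weisbach scaling)
          pump = e_cost * hours * 5.0e-4 * flow**3 / d**5 / 1000.0
          return capital + heat + pump

      costs = np.array([[annualized_cost(d, t) for t in insulation] for d in diameters])
      i, j = np.unravel_index(np.argmin(costs), costs.shape)
      print(f"optimal section: D = {diameters[i]:.2f} m, insulation = {insulation[j]:.3f} m, "
            f"cost = ${costs[i, j]:.0f}/yr")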

  1. An application of anti-optimization in the process of validating aerodynamic codes

    NASA Astrophysics Data System (ADS)

    Cruz, Juan R.

    An investigation was conducted to assess the usefulness of anti-optimization in the process of validating aerodynamic codes. Anti-optimization is defined here as the intentional search for regions where the computational and experimental results disagree. Maximizing such disagreements can be a useful tool in uncovering errors and/or weaknesses in both analyses and experiments. The codes chosen for this investigation were an airfoil code and a lifting line code used together as an analysis to predict three-dimensional wing aerodynamic coefficients. The parameter of interest was the maximum lift coefficient of the three-dimensional wing, CL max. The test domain encompassed Mach numbers from 0.3 to 0.8, and Reynolds numbers from 25,000 to 250,000. A simple rectangular wing was designed for the experiment. A wind tunnel model of this wing was built and tested in the NASA Langley Transonic Dynamics Tunnel. Selection of the test conditions (i.e., Mach and Reynolds numbers) was made by applying the techniques of response surface methodology and considerations involving the predicted experimental uncertainty. The test was planned and executed in two phases. In the first phase, runs were conducted at the pre-planned test conditions. Based on these results, additional runs were conducted in areas where significant differences in CL max were observed between the computational results and the experiment---in essence applying the concept of anti-optimization. These additional runs were used to verify the differences in CL max and assess the extent of the region where these differences occurred. The results of the experiment showed that the analysis was capable of predicting CL max to within 0.05 over most of the test domain. The application of anti-optimization succeeded in identifying a region where the computational and experimental values of CL max differed by more than 0.05, demonstrating the usefulness of anti-optimization in the process of validating aerodynamic codes.
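
    A minimal sketch of the anti-optimization step (not the study's actual procedure; both response surfaces below are stand-in surrogates) is to search the Mach/Reynolds test domain for the point of maximum disagreement between the predicted and measured CL max, for example with a global optimizer.

      # Illustrative sketch of anti-optimization (not the study's procedure):
      # search the Mach/Reynolds test domain for the point where predicted and
      # measured CL_max disagree most. Both models below are stand-ins; in
      # practice the "experiment" is a wind-tunnel response-surface fit.
      import numpy as np
      from scipy.optimize import differential_evolution

      def cl_max_code(mach, re):        # assumed surrogate for the analysis prediction
          return 1.3 - 0.4 * mach + 0.05 * np.log10(re / 25_000.0)

      def cl_max_experiment(mach, re):  # assumed surrogate for the measured response surface
          return 1.25 - 0.3 * mach + 0.07 * np.log10(re / 25_000.0) - 0.15 * mach**2

      def disagreement(x):
          mach, log_re = x
          return -abs(cl_max_code(mach, 10**log_re) - cl_max_experiment(mach, 10**log_re))

      bounds = [(0.3, 0.8), (np.log10(25_000), np.log10(250_000))]   # test domain from the study
      res = differential_evolution(disagreement, bounds, seed=0, tol=1e-8)
      mach_star, re_star = res.x[0], 10**res.x[1]
      print(f"largest |delta CL_max| = {-res.fun:.3f} at M = {mach_star:.2f}, Re = {re_star:.0f}")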

  2. MPEG-2/4 Low-Complexity Advanced Audio Coding Optimization and Implementation on DSP

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Huang, Hao-Yu; Chen, Yen-Lin; Peng, Hsin-Yuan; Huang, Jia-Hsiung

    This study presents several optimization approaches for the MPEG-2/4 Audio Advanced Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important optimization issue for implementing AAC codec on embedded and mobile devices is to reduce computational complexity and memory consumption. Due to power saving issues, most embedded and mobile systems can only provide very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT) and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM-core for peripheral controlling. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.

  3. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis. One is for three adjustable inputs and one is for four. Also, two optimization searches for calculated piston motion are presented for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  4. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates-as reported by a cache simulation tool, and confirmed by hardware counters-only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  5. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  6. Optimization of Mutation Pressure in Relation to Properties of Protein-Coding Sequences in Bacterial Genomes

    PubMed Central

    Błażej, Paweł; Miasojedow, Błażej; Grabińska, Małgorzata; Mackiewicz, Paweł

    2015-01-01

    Most mutations are deleterious and require energetically costly repairs. Therefore, it seems that any minimization of mutation rate is beneficial. On the other hand, mutations generate genetic diversity indispensable for evolution and adaptation of organisms to changing environmental conditions. Thus, it is expected that a spontaneous mutational pressure should be an optimal compromise between these two extremes. In order to study the optimization of the pressure, we compared mutational transition probability matrices from bacterial genomes with artificial matrices fulfilling the same general features as the real ones, e.g., the stationary distribution and the speed of convergence to the stationarity. The artificial matrices were optimized on real protein-coding sequences based on Evolutionary Strategies approach to minimize or maximize the probability of non-synonymous substitutions and costs of amino acid replacements depending on their physicochemical properties. The results show that the empirical matrices have a tendency to minimize the effects of mutations rather than maximize their costs on the amino acid level. They were also similar to the optimized artificial matrices in the nucleotide substitution pattern, especially the high transitions/transversions ratio. We observed no substantial differences between the effects of mutational matrices on protein-coding sequences in genomes under study in respect of differently replicated DNA strands, mutational cost types and properties of the referenced artificial matrices. The findings indicate that the empirical mutational matrices are rather adapted to minimize mutational costs in the studied organisms in comparison to other matrices with similar mathematical constraints. PMID:26121655

  7. SOAR: An extensible suite of codes for weld analysis and optimal weld schedules

    SciTech Connect

    Eisler, G.R.; Fuerschbach, P.W.

    1997-07-01

    A suite of MATLAB-based code modules has been developed to provide optimal weld schedules, regulating weld process parameters for CO2 and pulse Nd:YAG laser welding methods, and arc welding in support of the Smartweld manufacturing initiative at Sandia National Laboratories. The optimization methodology consists of mixed genetic and gradient-based algorithms to query semi-empirical, nonlinear algebraic models. The optimization output provides heat-input-efficient welds for user-specified weld dimensions. User querying of all weld models is available to examine sub-optimal schedules. In addition, a heat conduction equation solver for 2-D heat flow is available to provide the user with an additional check of weld thermal effects. The inclusion of thermodynamic properties allows the extension of the empirical models to include materials other than those tested. All solution methods are provided with graphical user interfaces and display pertinent results in two and three-dimensional form. The code architecture provides an extensible framework to add an arbitrary number of modules.

  8. Finite population analysis of the effect of horizontal gene transfer on the origin of an universal and optimal genetic code

    NASA Astrophysics Data System (ADS)

    Aggarwal, Neha; Vishwa Bandhu, Ashutosh; Sengupta, Supratim

    2016-06-01

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA and protein based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold, we find that the ten amino acid code having a structure that is most consistent with the standard genetic code (SGC) often gets fixed in the population with the highest probability. We examine how the threshold is determined by factors like the population size, length of the sequences and selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.

  9. Finite population analysis of the effect of horizontal gene transfer on the origin of an universal and optimal genetic code.

    PubMed

    Aggarwal, Neha; Bandhu, Ashutosh Vishwa; Sengupta, Supratim

    2016-05-27

    The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA and protein based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. Only when the probability of HGT events is above a critical threshold, we find that the ten amino acid code having a structure that is most consistent with the standard genetic code (SGC) often gets fixed in the population with the highest probability. We examine how the threshold is determined by factors like the population size, length of the sequences and selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.

  10. A optimized context-based adaptive binary arithmetic coding algorithm in progressive H.264 encoder

    NASA Astrophysics Data System (ADS)

    Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei

    2006-05-01

    Context-based Adaptive Binary Arithmetic Coding (CABAC) is a new entropy coding method introduced in H.264/AVC that is highly efficient for video coding. In this method, the probability of the current symbol is estimated using a carefully designed context model, which is adaptive and can approach the statistical characteristics of the source. An arithmetic coding mechanism then largely removes the inter-symbol redundancy. Compared with the UVLC method in the prior standard, CABAC is more complicated but reduces the bit rate efficiently. Based on a thorough analysis of the CABAC encoding and decoding methods, this paper proposes two methods, a sub-table method and a stream-reuse method, to improve the encoding efficiency of the H.264 JM reference code. In JM, the CABAC function produces the bits of every syntactic element one by one, and the repeated multiplications in the CABAC function make it inefficient. The proposed algorithm creates tables beforehand and then produces all bits of a syntactic element. In JM, the intra-prediction and inter-prediction mode selection algorithms, with their different criteria, are based on a rate-distortion optimization (RDO) model. One of the parameters of the RDO model is the bit rate produced by the CABAC operator. After intra-prediction or inter-prediction mode selection, the CABAC stream is discarded and recalculated to produce the output stream. The proposed stream-reuse algorithm stores the stream created during mode selection in memory and reuses it in the encoding function. Experimental results show that the proposed algorithms achieve, on average, 17 to 78 MSEL higher speed for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory space. The CABAC was realized in our progressive H.264 encoder.
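
    The sub-table idea can be illustrated with a toy range-update routine (this is not the H.264 JM code; the probability states and quantization below are simplified assumptions): the per-bin multiplication of the range by the LPS probability is replaced by a lookup into a table precomputed once over (probability state, quantized range) pairs.

      # Toy illustration of the "sub-table" idea (NOT the H.264 JM code): instead of
      # computing range * pLPS with a multiplication for every bin, precompute the
      # LPS sub-range for every (probability state, quantized range) pair once and
      # do a table lookup per bin, conceptually like CABAC's rangeTabLPS.
      P_STATES = [0.5 * (0.95 ** s) for s in range(64)]     # assumed per-state LPS probabilities
      Q_LEVELS = [256 + 64 * q + 32 for q in range(4)]      # representative ranges for the 4 quantized levels

      # one-time precomputation (the "sub-table")
      RANGE_LPS = [[max(2, int(q * p)) for q in Q_LEVELS] for p in P_STATES]

      def range_update(rng, state, bin_is_lps):
          """Per-bin interval subdivision using only a table lookup (no multiply)."""
          q_idx = (rng >> 6) & 3            # quantize the current 9-bit range to 2 bits
          r_lps = RANGE_LPS[state][q_idx]
          return r_lps if bin_is_lps else rng - r_lps

      # example: update the range for a few bins with a fixed probability state
      rng = 510
      for b in [False, False, True, False]:
          rng = range_update(rng, state=20, bin_is_lps=b)
          rng <<= max(0, 9 - rng.bit_length())   # renormalize back to 9 bits (simplified)
          print(rng)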

  11. The SWAN-SCALE code for the optimization of critical systems

    SciTech Connect

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-07-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.

  12. Operationally optimal vertex-based shape coding with arbitrary direction edge encoding structures

    NASA Astrophysics Data System (ADS)

    Lai, Zhongyuan; Zhu, Junhuan; Luo, Jiebo

    2014-07-01

    The intention of shape coding in the MPEG-4 is to improve the coding efficiency as well as to facilitate the object-oriented applications, such as shape-based object recognition and retrieval. These require both efficient shape compression and effective shape description. Although these two issues have been intensively investigated in data compression and pattern recognition fields separately, it remains an open problem when both objectives need to be considered together. To achieve high coding gain, the operational rate-distortion optimal framework can be applied, but the direction restriction of the traditional eight-direction edge encoding structure reduces its compression efficiency and description effectiveness. We present two arbitrary direction edge encoding structures to relax this direction restriction. They consist of a sector number, a short component, and a long component, which represent both the direction and the magnitude information of an encoding edge. Experiments on both shape coding and hand gesture recognition validate that our structures can reduce a large number of encoding vertices and save up to 48.9% bits. Besides, the object contours are effectively described and suitable for the object-oriented applications.

  13. End-to-End Rate-Distortion Optimized MD Mode Selection for Multiple Description Video Coding

    NASA Astrophysics Data System (ADS)

    Heng, Brian A.; Apostolopoulos, John G.; Lim, Jae S.

    2006-12-01

    Multiple description (MD) video coding can be used to reduce the detrimental effects caused by transmission over lossy packet networks. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on the network conditions as well as on the characteristics of the video itself. This paper proposes an adaptive MD coding approach which adapts to these conditions through the use of adaptive MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths. With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively tradeoff compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and network conditions and demonstrates the resulting gains in performance using an H.264-based adaptive MD video coder.
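
    A minimal sketch of loss-aware, rate-distortion optimized mode selection (not the paper's end-to-end distortion estimator; all per-mode numbers are assumptions) is shown below: the expected distortion mixes encoder distortion and loss-induced distortion according to the channel's packet-loss probability, and the mode minimizing expected distortion plus lambda times rate is chosen.

      # Minimal sketch (not the paper's estimator): choose, per macroblock, the MD
      # coding mode minimizing expected end-to-end cost J = D_expected + lambda*R,
      # where D_expected mixes encoder distortion and loss-concealment distortion
      # according to the packet-loss probability of the channel.
      # All per-mode numbers below are assumed for illustration.
      modes = {
          #                      rate(bits)  D_encode  D_if_description_lost
          "single_description":  (1000,       2.0,      60.0),
          "md_light_redundancy": (1200,       2.5,      25.0),
          "md_heavy_redundancy": (1600,       3.0,       8.0),
      }

      def expected_cost(rate, d_enc, d_loss, p_loss, lam):
          d_expected = (1.0 - p_loss) * d_enc + p_loss * d_loss
          return d_expected + lam * rate

      def select_mode(p_loss, lam=0.01):
          return min(modes, key=lambda m: expected_cost(*modes[m], p_loss, lam))

      for p in (0.0, 0.05, 0.20):
          print(f"p_loss = {p:.2f} -> {select_mode(p)}")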

  14. Dense Breasts

    MedlinePlus

    ... fatty tissue. On a mammogram, fatty tissue appears dark (radio-lucent) and the glandular and connective tissues ... white on mammography) and non-dense fatty tissue (dark on mammography) using a visual scale and assign ...

  15. Error threshold in optimal coding, numerical criteria, and classes of universalities for complexity

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2005-01-01

    The free energy of the random energy model at the transition point between the ferromagnetic and spin glass phases is calculated. At this point, equivalent to the decoding error threshold in optimal codes, the free energy has finite size corrections proportional to the square root of the number of degrees. The response of the magnetization to an external ferromagnetic field is maximal at values of magnetization equal to one-half. We give several criteria of complexity and define different universality classes. According to our classification, at the lowest class of complexity are random graphs, Markov models, and hidden Markov models. At the next level is the Sherrington-Kirkpatrick spin glass, connected to neuron-network models. On a higher level are critical theories, the spin glass phase of the random energy model, percolation, and self-organized criticality. The top level class involves highly optimized tolerance design, error thresholds in optimal coding, language, and, maybe, financial markets. Living systems are also related to the last class. The concept of antiresonance is suggested for complex systems.

  16. Optimizing performance of superscalar codes for a single Cray X1MSP processor

    SciTech Connect

    Shan, Hongzhang; Strohmaier, Erich; Oliker, Leonid

    2004-06-08

    The growing gap between sustained and peak performance for full-scale complex scientific applications on conventional supercomputers is a major concern in high performance computing. The recently released vector-based Cray X1 offers to bridge this gap for many demanding scientific applications. However, this unique architecture contains both data caches and multi-streaming processing units, and the optimal programming methodology is still under investigation. In this paper we investigate Cray X1 code optimization for a suite of computational kernels originally designed for superscalar processors. For our study, we select four applications from the SPLASH2 application suite (1-D FFT, Radix, Ocean, and Nbody), two kernels from the NAS benchmark suite (3-D FFT and CG), and a matrix-matrix multiplication kernel. Results show that in many cases, the addition of vectorization compiler directives results in faster runtimes. However, to achieve a significant performance improvement via increased vector length, it is often necessary to restructure the program at the source level, sometimes leading to algorithm-level transformations. Additionally, memory bank conflicts may result in substantial performance losses. These conflicts can often be exacerbated when optimizing code for increased vector lengths, and must be explicitly minimized. Finally, we investigate the influence of the X1 data caches on overall performance.

  17. A treatment planning code for inverse planning and 3D optimization in hadrontherapy.

    PubMed

    Bourhaleb, F; Marchetto, F; Attili, A; Pittà, G; Cirio, R; Donetti, M; Giordanengo, S; Givehchi, N; Iliescu, S; Krengli, M; La Rosa, A; Massai, D; Pecka, A; Pardo, J; Peroni, C

    2008-09-01

    The therapeutic use of protons and ions, especially carbon ions, is a new technique, and conforming the dose to the target is a challenge due to the energy deposition characteristics of hadron beams. An appropriate treatment planning system (TPS) is strictly necessary to take full advantage of these characteristics. We developed TPS software, ANCOD++, for the evaluation of the optimal conformal dose. ANCOD++ is an analytical code using the voxel-scan technique as an active method to deliver the dose to the patient, and provides treatment plans with both proton and carbon ion beams. The iterative algorithm, coded in C++ and running on Unix/Linux platforms, determines the best fluences of the individual beams to obtain an optimal physical dose distribution, delivering a maximum dose to the target volume and a minimum dose to critical structures. The TPS is supported by Monte Carlo simulations with the package GEANT3 to provide the necessary physical lookup tables and verify the optimized treatment plans. Dose verifications done by means of full Monte Carlo simulations show overall good agreement with the treatment planning calculations. We stress that the purpose of this work is the verification of the physical dose; future work will be dedicated to the radiobiological evaluation of the equivalent biological dose.
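
    The inverse-planning step, finding beam fluences that reproduce a prescribed dose, can be sketched as a projected gradient descent on the squared dose error. This is only an illustrative stand-in for the iterative algorithm described above; the dose-influence matrix D, the prescription vector, and the step-size choice are all assumptions.

        import numpy as np

        def optimize_fluences(D, prescription, iters=500):
            # D[i, j]: dose to voxel i per unit fluence of beam j (from lookup tables).
            # Minimize ||D w - prescription||^2 with non-negative fluences w.
            step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)   # safe gradient step
            w = np.zeros(D.shape[1])
            for _ in range(iters):
                grad = D.T @ (D @ w - prescription)
                w = np.maximum(w - step * grad, 0.0)           # project onto w >= 0
            return w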

  18. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with adjustable longitudinal and transverse degrees of freedom. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional key dimension. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
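
    At the heart of the encryption is iterative Fourier-domain phase retrieval. The sketch below is the plain two-plane Gerchberg-Saxton loop only, assuming a simple FFT relation between the mask plane and the observation plane; the multi-signal-window, multi-plane, polarization-marked variant used in the paper is considerably richer.

        import numpy as np

        def gerchberg_saxton(source_amp, target_amp, iters=100, seed=0):
            # Retrieve a phase-only mask so that |FFT(source_amp * exp(i*phase))| ~ target_amp.
            rng = np.random.default_rng(seed)
            phase = np.exp(1j * 2 * np.pi * rng.random(source_amp.shape))
            field = source_amp * phase
            for _ in range(iters):
                far = np.fft.fft2(field)
                far = target_amp * np.exp(1j * np.angle(far))        # impose target amplitude
                field = np.fft.ifft2(far)
                field = source_amp * np.exp(1j * np.angle(field))    # impose source amplitude
            return np.angle(field)                                   # the phase-only mask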

  19. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    PubMed

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
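
    The "optimal predictor" in a lifting step can be obtained by least squares on the signal itself. The 1-D sketch below predicts odd-indexed samples from their two even neighbours; it is only an analogue of the n-dimensional quincunx/row-column predictors discussed above, with names chosen for illustration.

        import numpy as np

        def optimal_lifting_predict(x):
            # Split into even/odd samples, fit least-squares prediction weights for the
            # odd samples from their even neighbours, and return the detail signal.
            even, odd = x[0::2], x[1::2]
            n = min(len(even) - 1, len(odd))
            A = np.stack([even[:n], even[1:n + 1]], axis=1)     # left/right even neighbours
            w, *_ = np.linalg.lstsq(A, odd[:n], rcond=None)     # optimal predictor weights
            detail = odd[:n] - A @ w                            # high-pass residual to be coded
            return w, detail

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
        print(optimal_lifting_predict(x)[0])   # [0.5, 0.5] for a linear ramp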

  20. MINVAR: a local optimization criterion for rate-distortion tradeoff in real time video coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Ngan, King Ngi

    2005-10-01

    In this paper, we propose a minimum variation (MINVAR) distortion criterion for the rate-distortion tradeoff in video coding. The MINVAR-based rate-distortion tradeoff framework provides a local optimization strategy as a rate control mechanism for real-time video coding applications: the distortion variation is minimized while the corresponding bit rate fluctuation is kept limited by utilizing the encoder buffer. We use the H.264 video codec to evaluate the performance of the proposed method. As shown in the simulation results, the decoded picture quality of the proposed approach is smoother than that of the traditional H.264 joint model (JM) rate control algorithm. The global video quality, measured as the average PSNR, is maintained while better subjective visual quality is achieved.

  1. Dense array expressions

    NASA Astrophysics Data System (ADS)

    Wilson, Joseph N.; Chen, LiangMing

    1999-10-01

    Various researchers have realized the value of implementing loop fusion to evaluate dense (pointwise) array expressions. Recently, the method of template metaprogramming in C++ has been used to significantly speed up the evaluation of array expressions, allowing C++ programs to achieve performance comparable to or better than FORTRAN for numerical analysis applications. Unfortunately, the template metaprogramming technique suffers from several limitations in applicability, portability, and potential performance. We present a framework for evaluating dense array expressions in object-oriented programming languages. We demonstrate how this technique supports both common subexpression elimination and threaded implementation, and we compare its performance to that of object-library and hand-generated code.
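
    The essence of such frameworks is that an array expression builds a lightweight expression tree and evaluation walks every element once, avoiding temporaries. The toy Python sketch below mimics the idea; the original work targets C++ and other object-oriented languages, and the class names here are invented for illustration.

        class Expr:
            def __add__(self, other):
                return BinOp(self, other, lambda a, b: a + b)
            def __mul__(self, other):
                return BinOp(self, other, lambda a, b: a * b)

        class Arr(Expr):
            def __init__(self, data):
                self.data = data
            def at(self, i):
                return self.data[i]
            def __len__(self):
                return len(self.data)

        class BinOp(Expr):
            def __init__(self, left, right, op):
                self.left, self.right, self.op = left, right, op
            def at(self, i):
                return self.op(self.left.at(i), self.right.at(i))
            def __len__(self):
                return len(self.left)

        def evaluate(expr):
            # One fused loop over the whole expression tree: no intermediate arrays.
            return [expr.at(i) for i in range(len(expr))]

        a, b, c = Arr([1, 2, 3]), Arr([4, 5, 6]), Arr([7, 8, 9])
        print(evaluate(a + b * c))   # [29, 42, 57]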

  2. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  3. Optimizing Network-Coded Cooperative Communications via Joint Session Grouping and Relay Node Selection

    DTIC Science & Technology

    2011-01-01

    Optimizing Network-Coded Cooperative Communications via Joint Session Grouping and Relay Node Selection Sushant Sharma Yi Shi Y. Thomas Hou Hanif...that for a single relay node, we can group as many sessions as we want. But, in a recent study [20], Sharma et al. showed that there exists a so-called...destination wireless network. In [20], Sharma et al. considered NC-CC with only one relay node. Their analysis showed that NC is not always good for CC, and

  4. Optimized clinical performance of growth hormone with an expanded genetic code.

    PubMed

    Cho, Ho; Daniel, Tom; Buechler, Ying Ji; Litzinger, David C; Maio, Zhenwei; Putnam, Anna-Maria Hays; Kraynov, Vadim S; Sim, Bee-Cheng; Bussell, Stuart; Javahishvili, Tsotne; Kaphle, Sami; Viramontes, Guillermo; Ong, Mike; Chu, Stephanie; Becky, G C; Lieu, Ricky; Knudsen, Nick; Castiglioni, Paola; Norman, Thea C; Axelrod, Douglas W; Hoffman, Andrew R; Schultz, Peter G; DiMarchi, Richard D; Kimmel, Bruce E

    2011-05-31

    The ribosomal incorporation of nonnative amino acids into polypeptides in living cells provides the opportunity to endow therapeutic proteins with unique pharmacological properties. We report here the first clinical study of a biosynthetic protein produced using an expanded genetic code. Incorporation of p-acetylphenylalanine (pAcF) at distinct locations in human growth hormone (hGH) allowed site-specific conjugation with polyethylene glycol (PEG) to produce homogeneous hGH variants. A mono-PEGylated mutant hGH modified at residue 35 demonstrated favorable pharmacodynamic properties in GH-deficient rats. Clinical studies in GH-deficient adults demonstrated efficacy and safety comparable to native human growth hormone therapy but with increased potency and reduced injection frequency. This example illustrates the utility of nonnative amino acids to optimize protein therapeutics in an analogous fashion to the use of medicinal chemistry to optimize conventional natural products, low molecular weight drugs, and peptides.

  5. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    NASA Astrophysics Data System (ADS)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling (FSA-AMC) algorithm was investigated for solving the complex non-convex parameter optimization problem. The registration error for a given parameter set was quantified using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum optical-flow parameters and closely matched the best registration parameters obtained using an exhaustive parameter search.
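
    A generic simulated-annealing skeleton over the registration-parameter space looks like the sketch below; the cost callback would run a registration and return the mTRE. The cooling schedule and acceptance rule are the textbook ones, not the adaptive Monte Carlo sampling refinement of FSA-AMC, and all names are illustrative.

        import math
        import random

        def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.95, steps=200):
            # cost(params) -> registration error (e.g. mTRE); neighbor(params) -> candidate.
            x, fx, t = init, cost(init), t0
            for _ in range(steps):
                y = neighbor(x)
                fy = cost(y)
                # Accept improvements always, worse moves with Boltzmann probability.
                if fy < fx or random.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
                    x, fx = y, fy
                t *= cooling
            return x, fx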

  6. Optimal configuration of respiratory navigator gating for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI.

    PubMed

    Hamlet, Sean M; Haggerty, Christopher M; Suever, Jonathan D; Wehner, Gregory J; Andres, Kristin N; Powell, David K; Zhong, Xiaodong; Fornwalt, Brandon K

    2017-03-01

    To determine the optimal respiratory navigator gating configuration for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI. Two-dimensional spiral cine DENSE was performed on a 3 Tesla MRI using two single-navigator configurations (retrospective, prospective) and a combined "dual-navigator" configuration in 10 healthy adults and 20 healthy children. The adults also underwent breathhold DENSE as a reference standard for comparisons. Peak left ventricular strains, signal-to-noise ratio (SNR), and navigator efficiency were compared. Subjects also underwent dual-navigator gating with and without visual feedback to determine the effect on navigator efficiency. There were no differences in circumferential, radial, and longitudinal strains between navigator-gated and breathhold DENSE (P = 0.09-0.95) (as confidence intervals, retrospective: [-1.0%, 1.1%], [-7.4%, 2.0%], [-1.0%, 1.2%]; prospective: [-0.6%, 2.7%], [-2.8%, 8.3%], [-0.3%, 2.9%]; dual: [-1.6%, 0.5%], [-8.3%, 3.2%], [-0.8%, 1.9%], respectively). The dual configuration maintained SNR compared with breathhold acquisitions (16 versus 18, P = 0.06). SNR for the prospective configuration was lower than for the dual navigator in adults (P = 0.004) and children (P < 0.001). Navigator efficiency was higher (P < 0.001) for both retrospective (54%) and prospective (56%) configurations compared with the dual configuration (35%). Visual feedback improved the dual-configuration navigator efficiency to 55% (P < 0.001). When quantifying left ventricular strains using spiral cine DENSE MRI, a dual navigator configuration results in the highest SNR in adults and children. In adults, a retrospective configuration has good navigator efficiency without a substantial drop in SNR. Prospective gating should be avoided because it has the lowest SNR. Visual feedback represents an effective option to maintain navigator efficiency while using a dual navigator configuration.

  7. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the whole code parallelizes ideally, in practice the results on different architectures, with different compilers and performance measurement tools, depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups were obtained.
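
    For context, the classic single-axis sort-and-sweep broad phase that the paper's neighborhood algorithm builds on can be written in a few lines. Here the boxes are per-body bounding-interval extents along the sweep axis, and the routine returns candidate pairs only; the narrow-phase polyhedron test would follow.

        def sort_and_sweep(boxes):
            # boxes: list of (lo, hi) intervals of each body's bounding box on the sweep axis.
            order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
            active, pairs = [], []
            for i in order:
                lo_i = boxes[i][0]
                active = [j for j in active if boxes[j][1] >= lo_i]  # drop intervals that ended
                pairs.extend((j, i) for j in active)                 # remaining intervals overlap i
                active.append(i)
            return pairs

        print(sort_and_sweep([(0.0, 2.0), (1.5, 3.0), (4.0, 5.0)]))  # [(0, 1)]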

  8. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.

    2015-12-01

    The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package into the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where the tasks are compute- and memory-intensive. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing the elapsed execution time of the studied routines as much as possible. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large, and extra large) are tested using a desktop environment with 4 CPU cores and a multi-node server with 128 CPU nodes. Elapsed times, speedup, and efficiency results are shown.

  9. Optimized conical shaped charge design using the SCAP (Shaped Charge Analysis Program) code

    SciTech Connect

    Vigil, M.G.

    1988-09-01

    The Shaped Charge Analysis Program (SCAP) is used to analytically model and optimize the design of Conical Shaped Charges (CSC). A variety of existing CSCs are initially modeled with the SCAP code and the predicted jet tip velocities, jet penetrations, and optimum standoffs are compared to previously published experimental results. The CSCs vary in size from 0.69 inch (1.75 cm) to 9.125 inch (23.18 cm) conical liner inside diameter. Two liner materials (copper and steel) and several explosives (Octol, Comp B, PBX-9501) are included in the CSCs modeled. The target material was mild steel. A parametric study was conducted using the SCAP code to obtain the optimum design for a 3.86 inch (9.8 cm) CSC. The variables optimized in this study included the CSC apex angle, conical liner thickness, explosive height, optimum standoff, tamper/confinement thickness, and explosive width. The non-dimensionalized jet penetration to diameter ratio versus the above parameters are graphically presented. 12 refs., 10 figs., 7 tabs.

  10. Bayesian optimization of modeled CO2 fluxes in Oregon using a dense tower network, aircraft campaigns, and the community land model 4.5

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Conley, S. A.; Goeckede, M.; Andrews, A. E.; Masarie, K. A.; Sweeney, C.

    2015-12-01

    Modeled estimates of net ecosystem exchange (NEE) calculated with CLM4.5 at 4 km horizontal resolution were optimized using a classical Bayesian inversion approach with atmospheric mixing ratio observations from a dense tower network in Oregon. We optimized NEE in monthly batches for the years 2012 through 2014, and determined the associated reduction in flux uncertainties broken down by sub-domain. The WRF-STILT transport model was deployed to link modeled fluxes of CO2 to the concentrations from 5 high-precision, high-accuracy CO2 observation towers equipped with CRDS analyzers. To find the best compromise between aggregation errors and the degrees of freedom in the system, we developed an approach for the spatial structuring of our domain informed by unsupervised clustering based on the flux values of the prior state vector and information about the land surface, soil, and vegetation distribution used in the model. To assess the uncertainty of the transport modeling component within our inverse optimization framework, we used data from 7 airborne measurement campaigns over the Oregon domain during the study period, providing detailed information about the errors in the boundary-layer height and wind field of the transport model. The optimized model was then used to estimate future CO2 budgets for Oregon, including potential effects of LULC changes from conventional agriculture towards energy crops.
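
    The classical batch Bayesian inversion underlying such flux optimizations has a closed form. The sketch below assumes the usual linear-Gaussian setup: H is the transport operator (tower footprints), P the prior flux error covariance, and R the model-data mismatch covariance. These symbols are standard conventions, not quoted from the abstract.

        import numpy as np

        def bayesian_flux_update(x_prior, P, H, R, y):
            # Posterior flux estimate and covariance for observations y = H x + noise.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)          # Kalman-like gain
            x_post = x_prior + K @ (y - H @ x_prior)
            P_post = P - K @ H @ P                  # uncertainty reduction from the towers
            return x_post, P_post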

  11. Dense Plasma Focus Modeling

    SciTech Connect

    Li, Hui; Li, Shengtai; Jungman, Gerard; Hayes-Sterbenz, Anna Catherine

    2016-08-31

    The mechanisms for pinch formation in Dense Plasma Focus (DPF) devices, with the generation of high-energy ions beams and subsequent neutron production over a relatively short distance, are not fully understood. Here we report on high-fidelity 2D and 3D numerical magnetohydrodynamic (MHD) simulations using the LA-COMPASS code to study the pinch formation dynamics and its associated instabilities and neutron production.

  12. Compiler blockability of dense matrix factorizations.

    SciTech Connect

    Carr, S.; Lehoucq, R. B.; Mathematics and Computer Science; Michigan Technological Univ.

    1997-09-01

    The goal of the LAPACK project is to provide efficient and portable software for dense numerical linear algebra computations. By recasting many of the fundamental dense matrix computations in terms of calls to an efficient implementation of the BLAS (Basic Linear Algebra Subprograms), the LAPACK project has, in large part, achieved its goal. Unfortunately, the efficient implementation of the BLAS often results in machine-specific code that is not portable across multiple architectures without a significant loss in performance or a significant reoptimization effort. This article examines whether most of the hand optimizations performed on matrix factorization codes are unnecessary because they can (and should) be performed by the compiler. We believe that it is better for the programmer to express algorithms in a machine-independent form and allow the compiler to handle the machine-dependent details. This gives the algorithms portability across architectures and removes the error-prone, expensive, and tedious process of hand optimization. Although there currently exist no production compilers that can perform all the loop transformations discussed in this article, a description of current research in compiler technology is provided that will prove beneficial to the numerical linear algebra community. We show that the Cholesky factorization can be optimized automatically by a compiler to be as efficient as the hand-optimized version found in LAPACK. We also show that the QR factorization may be optimized by the compiler to perform comparably with the hand-optimized LAPACK version on modest matrix sizes. Our approach allows us to conclude that, with the advent of the compiler optimizations discussed in this article, matrix factorizations may be efficiently implemented in a BLAS-less form.
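
    The blocked loop structure at issue is easy to see in a small sketch. The right-looking blocked Cholesky below (plain NumPy, with an assumed block size nb) shows the factor-panel-update pattern that the article argues a compiler could derive from the unblocked loop nest; it is not the LAPACK code itself.

        import numpy as np

        def blocked_cholesky(A, nb=64):
            # Right-looking blocked Cholesky: factor a diagonal block, solve the
            # panel below it, then update the trailing submatrix.
            A = A.copy()
            n = A.shape[0]
            for k in range(0, n, nb):
                e = min(k + nb, n)
                A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])              # diagonal block
                if e < n:
                    L_kk = A[k:e, k:e]
                    A[e:, k:e] = np.linalg.solve(L_kk, A[e:, k:e].T).T     # panel solve
                    A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T                 # trailing update
            return np.tril(A)

        M = np.random.rand(200, 200)
        A = M @ M.T + 200 * np.eye(200)
        L = blocked_cholesky(A, nb=32)
        print(np.allclose(L @ L.T, A))   # True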

  13. Optimization of In-Situ Shot-Peening-Assisted Cold Spraying Parameters for Full Corrosion Protection of Mg Alloy by Fully Dense Al-Based Alloy Coating

    NASA Astrophysics Data System (ADS)

    Wei, Ying-Kang; Luo, Xiao-Tao; Li, Cheng-Xin; Li, Chang-Jiu

    2017-01-01

    Magnesium-based alloys have excellent physical and mechanical properties for many applications. However, due to their high chemical reactivity, magnesium and its alloys are highly susceptible to corrosion. In this study, an Al6061 coating was deposited on AZ31B magnesium by cold spray with a commercial Al6061 powder blended with large-sized stainless steel particles (in-situ shot-peening particles) using nitrogen gas. The microstructure and corrosion behavior of the sprayed coating were investigated as a function of the shot-peening particle content in the feedstock. It is found that by introducing the in-situ tamping effect using shot-peening (SP) particles, the plastic deformation of the deposited particles is significantly enhanced, resulting in a fully dense Al6061 coating. SEM observations reveal that no SP particles are deposited into the Al6061 coating at the optimized spraying parameters. The porosity of the coating decreases significantly from 10.7 to 0.4% as the SP particle content increases from 20 to 60 vol.%. The electrochemical corrosion experiments reveal that this novel in-situ SP-assisted cold spraying is effective for depositing a fully dense Al6061 coating through which aqueous solution cannot permeate, and can thus provide exceptional protection of magnesium-based materials from corrosion.

  14. Dense, shape‐optimized posterior 32‐channel coil for submillimeter functional imaging of visual cortex at 3T

    PubMed Central

    Grigorov, Filip; van der Kouwe, Andre J.; Wald, Lawrence L.; Keil, Boris

    2015-01-01

    Purpose Functional neuroimaging of small cortical patches such as columns is essential for testing computational models of vision, but imaging from cortical columns at conventional 3T fields is exceedingly difficult. By targeting the visual cortex exclusively, we tested whether combined optimization of shape, coil placement, and electronics would yield the necessary gains in signal‐to‐noise ratio (SNR) for submillimeter visual cortex functional MRI (fMRI). Method We optimized the shape of the housing to a population‐averaged atlas. The shape was comfortable without cushions and resulted in the maximally proximal placement of the coil elements. By using small wire loops with the least number of solder joints, we were able to maximize the Q factor of the individual elements. Finally, by planning the placement of the coils using the brain atlas, we were able to target the arrangement of the coil elements to the extent of the visual cortex. Results The combined optimizations led to as much as two‐fold SNR gain compared with a whole‐head 32‐channel coil. This gain was reflected in temporal SNR as well and enabled fMRI mapping at 0.75 mm resolutions using a conventional GRAPPA‐accelerated gradient echo echo planar imaging. Conclusion Integrated optimization of shape, electronics, and element placement can lead to large gains in SNR and empower submillimeter fMRI at 3T. Magn Reson Med 76:321–328, 2016. © 2015 Wiley Periodicals, Inc. PMID:26218835

  15. Scalable coding of depth maps with R-D optimized embedding.

    PubMed

    Mathew, Reji; Taubman, David; Zanuttigh, Pietro

    2013-05-01

    Recent work on depth map compression has revealed the importance of incorporating a description of discontinuity boundary geometry into the compression scheme. We propose a novel compression strategy for depth maps that incorporates geometry information while achieving the goals of scalability and embedded representation. Our scheme involves two separate image pyramid structures, one for breakpoints and the other for sub-band samples produced by a breakpoint-adaptive transform. Breakpoints capture geometric attributes, and are amenable to scalable coding. We develop a rate-distortion optimization framework for determining the presence and precision of breakpoints in the pyramid representation. We employ a variation of the EBCOT scheme to produce embedded bit-streams for both the breakpoint and sub-band data. Compared to JPEG 2000, our proposed scheme enables the same scalability features while achieving substantially improved rate-distortion performance at the higher bit-rate range and comparable performance at the lower rates.

  16. Optimal design of FIR triplet halfband filter bank and application in image coding.

    PubMed

    Kha, H H; Tuan, H D; Nguyen, T Q

    2011-02-01

    This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.

  17. Optimization and implementation of the integer wavelet transform for image coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella

    2002-01-01

    This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of a finite-precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.

  18. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: Equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb

    NASA Astrophysics Data System (ADS)

    Piron, R.; Blenski, T.

    2011-02-01

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included.

  19. Variational-average-atom-in-quantum-plasmas (VAAQP) code and virial theorem: equation-of-state and shock-Hugoniot calculations for warm dense Al, Fe, Cu, and Pb.

    PubMed

    Piron, R; Blenski, T

    2011-02-01

    The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model fulfills automatically the virial theorem in the case of local-density approximations to the exchange-correlation free-energy. Applications of the model to the equation-of-state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included. ©2011 American Physical Society

  20. Dense, shape-optimized posterior 32-channel coil for submillimeter functional imaging of visual cortex at 3T.

    PubMed

    Farivar, Reza; Grigorov, Filip; van der Kouwe, Andre J; Wald, Lawrence L; Keil, Boris

    2016-07-01

    Functional neuroimaging of small cortical patches such as columns is essential for testing computational models of vision, but imaging from cortical columns at conventional 3T fields is exceedingly difficult. By targeting the visual cortex exclusively, we tested whether combined optimization of shape, coil placement, and electronics would yield the necessary gains in signal-to-noise ratio (SNR) for submillimeter visual cortex functional MRI (fMRI). We optimized the shape of the housing to a population-averaged atlas. The shape was comfortable without cushions and resulted in the maximally proximal placement of the coil elements. By using small wire loops with the least number of solder joints, we were able to maximize the Q factor of the individual elements. Finally, by planning the placement of the coils using the brain atlas, we were able to target the arrangement of the coil elements to the extent of the visual cortex. The combined optimizations led to as much as two-fold SNR gain compared with a whole-head 32-channel coil. This gain was reflected in temporal SNR as well and enabled fMRI mapping at 0.75 mm resolutions using a conventional GRAPPA-accelerated gradient echo echo planar imaging. Integrated optimization of shape, electronics, and element placement can lead to large gains in SNR and empower submillimeter fMRI at 3T. Magn Reson Med 76:321-328, 2016. © 2015 Wiley Periodicals, Inc. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.

  1. Variation in coding influence across the USA. Risk and reward in reimbursement optimization.

    PubMed

    Lorence, Daniel P; Richards, Michael

    2002-01-01

    Recent anti-fraud enforcement policies across the US health-care system have led to widespread speculation about the effectiveness of increased penalties for overcharging practices adopted by health-care service organizations. Severe penalties, including imprisonment, suggest that fraudulent billing, and related misclassification of services provided to patients, would be greatly reduced or eliminated as a result of increased government investigation and reprisal. This study sought to measure the extent to which health information managers reported being influenced by superiors to manipulate coding and classification of patient data. Findings from a nationwide survey of managers suggest that such practices are still pervasive, despite recent counter-fraud legislation and highly visible prosecution of fraudulent behaviors. Examining variation in influences exerted from both within and external to specific service delivery settings, results suggest that pressure to alter classification codes occurred both within and external to the provider setting. We also examine how optimization influences vary across demographic, practice setting, and market characteristics, and find significant variation in influence across practice settings and market types. Implications for reimbursement programs and evidence-based health care are discussed.

  2. Optimization of Parallel Legendre Transform using Graphics Processing Unit (GPU) for a Geodynamo Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.

    2015-12-01

    Convection and the magnetic field of the Earth's outer core are expected to span vast length scales. To resolve these flows, high performance computing is required for geodynamo simulations using the spherical harmonic transform (SHT); a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters with on the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To further optimize, we investigate three different algorithms for the SHT using GPUs. One is to preemptively compute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU simultaneously. In the third approach, we initially partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU. Thereafter, the partitioned work is computed simultaneously in the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computations on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we will compare and contrast the different algorithms in the context of GPUs.

  3. Design of coded aperture arrays by means of a global optimization algorithm

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2006-08-01

    Coded aperture imaging (CAI) has evolved as a standard technique for imaging high energy photon sources and has found numerous applications. Coded aperture arrays (CAAs) are the most important devices in the applications of CAI. In recent years, many approaches have been presented to design optimum or near-optimum CAAs. Uniformly redundant arrays (URAs) are the most successful CAAs because their cyclic autocorrelation consists of a sequence of delta functions on a flat sidelobe, which can easily be subtracted once the object has been reconstructed. Unfortunately, the existing methods can only be used to design URAs with a limited number of array sizes and a fixed autocorrelation sidelobe-to-peak ratio. In this paper, we present a method to design more flexible URAs by means of a global optimization algorithm named DIRECT. With our approach, we obtain various types of URAs, including the filled URAs that can be constructed by existing methods and sparse URAs that, as far as we know, have not previously been constructed or reported.
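
    A natural merit function for such a search scores how flat the off-peak cyclic autocorrelation of a candidate binary mask is; a global optimizer such as DIRECT would then minimize it over the mask parameters. The function below is a plausible sketch of that scoring step, not the paper's exact objective.

        import numpy as np

        def ura_merit(mask):
            # Variance of the off-peak cyclic autocorrelation of a 0/1 aperture mask;
            # a perfect URA has perfectly flat sidelobes, so the merit is zero.
            f = np.fft.fft2(mask)
            acorr = np.real(np.fft.ifft2(f * np.conj(f)))   # cyclic autocorrelation
            sidelobes = np.delete(acorr.ravel(), 0)         # drop the central (zero-shift) peak
            return sidelobes.var()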

  4. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on the thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. The typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
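
    A toy steepest-descent analogue of this setup is sketched below: layer weight is minimized under an exponential dose-attenuation model, with the dose limit enforced by a quadratic penalty. The single-direction model, the parameter values, and the penalty handling are illustrative simplifications, not DOPEX's multi-direction, multi-constraint formulation.

        import numpy as np

        def shield_descent(rho, mu, dose0, dose_limit, t0, steps=5000, lr=1e-3, penalty=1e3):
            # Minimize shield weight sum(rho * t) subject to dose0 * exp(-mu . t) <= dose_limit.
            rho, mu = np.asarray(rho, float), np.asarray(mu, float)
            t = np.asarray(t0, float).copy()
            for _ in range(steps):
                dose = dose0 * np.exp(-mu @ t)
                grad = rho.copy()                                   # weight gradient
                if dose > dose_limit:                               # penalty gradient when violated
                    grad -= 2.0 * penalty * (dose - dose_limit) * dose * mu
                t = np.maximum(t - lr * grad, 0.0)                  # thicknesses stay non-negative
            return t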

  5. Analytical computation of the derivative of PSF for the optimization of phase mask in wavefront coding system.

    PubMed

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-09-05

    A wavefront coding system can realize defocus invariance of the PSF/OTF with a phase mask inserted in the pupil plane. Ideally, the derivative of the PSF/OTF with respect to defocus error should be as close to zero as possible over the extended depth of field/focus of the wavefront coding system. In this paper, we propose an analytical expression for the computation of the derivative of the PSF. With this expression, a PSF-derivative-based merit function can be used in the optimization of a wavefront coding system with any type of phase mask and aberrations. Computations of the PSF derivative using the proposed expression and the FFT, respectively, are compared and discussed. We also demonstrate the optimization of a generic polynomial phase mask in a wavefront coding system as an example.

  6. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai

    2010-12-01

    We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of the SVA. Both objective and subjective evaluations of the extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial and/or temporal visual attention cues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of imperceptible image quality degradation of the background.

  7. Multiplex iterative plasmid engineering for combinatorial optimization of metabolic pathways and diversification of protein coding sequences.

    PubMed

    Li, Yifan; Gu, Qun; Lin, Zhenquan; Wang, Zhiwen; Chen, Tao; Zhao, Xueming

    2013-11-15

    Engineering complex biological systems typically requires combinatorial optimization to achieve the desired functionality. Here, we present Multiplex Iterative Plasmid Engineering (MIPE), which is a highly efficient and customized method for combinatorial diversification of plasmid sequences. MIPE exploits ssDNA mediated λ Red recombineering for the introduction of mutations, allowing it to target several sites simultaneously and generate libraries of up to 10(7) sequences in one reaction. We also describe "restriction digestion mediated co-selection (RD CoS)", which enables MIPE to produce enhanced recombineering efficiencies with greatly simplified coselection procedures. To demonstrate this approach, we applied MIPE to fine-tune gene expression level in the 5-gene riboflavin biosynthetic pathway and successfully isolated a clone with 2.67-fold improved production in less than a week. We further demonstrated the ability of MIPE for highly multiplexed diversification of protein coding sequence by simultaneously targeting 23 codons scattered along the 750 bp sequence. We anticipate this method to benefit the optimization of diverse biological systems in synthetic biology and metabolic engineering.

  8. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered to calculate the cosine factor that is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to consider the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important. Therefore, it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for heliostat fields (small, medium, and large) containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.

  9. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    PubMed Central

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and Eigenvalues from closed-form calculations of principal components analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057

  10. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    NASA Astrophysics Data System (ADS)

    Gather, Malte C.; Yun, Seok Hyun

    2014-12-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (-7 dB) and support strong optical amplification (gnet=22 cm-1; 96 dB cm-1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  11. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers.

    PubMed

    Gather, Malte C; Yun, Seok Hyun

    2014-12-08

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (-7 dB) and support strong optical amplification (gnet=22 cm(-1); 96 dB cm(-1)). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  12. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    PubMed Central

    Gather, Malte C.; Yun, Seok Hyun

    2015-01-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (−7 dB) and support strong optical amplification (gnet = 22 cm−1; 96 dB cm−1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles. PMID:25483850

  13. Comprehensive evaluation of multi-satellite precipitation products with a dense rain gauge network and optimally merging their simulated hydrological flows using the Bayesian model averaging method

    NASA Astrophysics Data System (ADS)

    Jiang, Shanhu; Ren, Liliang; Hong, Yang; Yong, Bin; Yang, Xiaoli; Yuan, Fei; Ma, Mingwei

    2012-07-01

    This study first comprehensively evaluates three widely used satellite precipitation products (TMPA 3B42V6, TMPA 3B42RT, and CMORPH) against a dense rain gauge network in the Mishui basin (9972 km2) in South China and then optimally merges their simulated hydrologic flows from the semi-distributed Xinanjiang model using the Bayesian model averaging method. The initial satellite precipitation data comparisons show that the reanalyzed 3B42V6, with a bias of -4.54%, matched the rain gauge observations best, while the two near real-time satellite datasets (3B42RT and CMORPH) largely underestimated precipitation, by 42.72% and 40.81% respectively. With the model parameters first benchmarked against the rain gauge data, the streamflow simulation from 3B42V6 was also the best among the three products, while the two near real-time satellite datasets produced worse biases and Nash-Sutcliffe coefficients (NSCEs). Still, when the model parameters were recalibrated with each individual satellite dataset, the performance of the streamflow simulations from the two near real-time satellite products improved significantly, demonstrating the need for specific calibration of the hydrological models for near real-time satellite inputs. Moreover, when the streamflows forced by the two near real-time satellite precipitation products, and by all three satellite precipitation products, were optimally merged using the Bayesian model averaging method, the resulting streamflow series improved further and became more robust. In summary, the three current state-of-the-art satellite precipitation products have demonstrated potential in hydrological research and applications. The benchmarking, recalibration, and optimal merging schemes for streamflow simulation at a basin scale described in the present work will hopefully be a reference for future utilizations of satellite precipitation products in global and regional
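
    Once the member weights have been trained, the final merging step of Bayesian model averaging reduces to a weighted combination of the member streamflow simulations. The sketch below shows only that combination step under the assumption of fixed weights; estimating the weights (typically by EM against observed flows) is the substantive part of BMA and is not shown, and the example numbers are invented.

        import numpy as np

        def bma_combine(member_flows, weights):
            # member_flows: array of shape (n_members, n_timesteps); weights sum to one.
            w = np.asarray(weights, float)
            w = w / w.sum()
            return (np.asarray(member_flows, float) * w[:, None]).sum(axis=0)

        flows = [[10.0, 12.0, 9.0],    # e.g. simulation forced by one precipitation product
                 [11.0, 14.0, 8.0]]    # e.g. simulation forced by another product
        print(bma_combine(flows, [0.6, 0.4]))   # [10.4, 12.8, 8.6]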

  14. An Optimal Pull-Push Scheduling Algorithm Based on Network Coding for Mesh Peer-to-Peer Live Streaming

    NASA Astrophysics Data System (ADS)

    Cui, Laizhong; Jiang, Yong; Wu, Jianping; Xia, Shutao

    Most large-scale Peer-to-Peer (P2P) live streaming systems are constructed as a mesh structure, which can provide robustness in the dynamic P2P environment. The pull scheduling algorithm widely used in this mesh structure, however, degrades the performance of the entire system. Recently, network coding was introduced into mesh P2P streaming systems to improve performance, making a push strategy feasible. One of the most famous scheduling algorithms based on network coding is R2, with a random push strategy. Although R2 has achieved some success, the push scheduling strategy still lacks a theoretical model and an optimal solution. In this paper, we propose a novel optimal pull-push scheduling algorithm based on network coding, which consists of two stages: an initial pull stage and a push stage. The main contributions of this paper are: 1) we put forward a theoretical analysis model that considers the scarcity and timeliness of segments; 2) we formulate the push scheduling problem as a global optimization problem and decompose it into local optimization problems on individual peers; 3) we introduce rules to transform the local optimization problem into a classical min-cost optimization problem and solve it; 4) we combine the pull strategy with the push strategy and systematically realize our scheduling algorithm. Simulation results demonstrate that the decode delay, decode ratio, and redundant fraction of a P2P streaming system with our algorithm are significantly improved, without losing throughput or increasing overhead.

  15. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EAs) seem to be one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacements with regard to their polarity. Our results indicate that the use of crossover operators can significantly improve the quality of the solutions. Moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure

  16. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
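
    A minimal Python sketch of the general idea (not the authors' actual approximations, whose coefficients are not reproduced here): replace the logarithm used in photon free-path sampling with a fitted low-order polynomial plus exact range reduction, and check the resulting error:

      # Illustrative sketch: replace ln() in photon free-path sampling,
      # s = -ln(u)/mu_t, with a degree-5 polynomial fitted on [0.5, 1) plus
      # exact range reduction u = m * 2**e, so that ln(u) = ln(m) + e*ln(2).
      import numpy as np

      m_grid = np.linspace(0.5, 1.0, 2001)
      coeffs = np.polynomial.polynomial.polyfit(m_grid, np.log(m_grid), deg=5)
      LN2 = float(np.log(2.0))

      def fast_log(u):
          mantissa, exponent = np.frexp(u)        # u = mantissa * 2**exponent, mantissa in [0.5, 1)
          return np.polynomial.polynomial.polyval(mantissa, coeffs) + exponent * LN2

      rng = np.random.default_rng(1)
      u = rng.random(100000)
      mu_t = 10.0                                  # illustrative total attenuation coefficient (1/mm)
      steps_exact = -np.log(u) / mu_t
      steps_fast = -fast_log(u) / mu_t
      rel_err = np.abs(steps_fast - steps_exact) / steps_exact
      print("max relative error:", rel_err.max())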

  17. Code to Optimize Load Sharing of Split-Torque Transmissions Applied to the Comanche Helicopter

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Most helicopters now in service have a transmission with a planetary design. Studies have shown that some helicopters would be lighter and more reliable if they had a transmission with a split-torque design instead. However, a split-torque design has never been used by a U.S. helicopter manufacturer because there has been no proven method to ensure equal sharing of the load among the multiple load paths. The Sikorsky/Boeing team has chosen to use a split-torque transmission for the U.S. Army's Comanche helicopter, and Sikorsky Aircraft is designing and manufacturing the transmission. To help reduce the technical risk of fielding this helicopter, NASA and the Army have done the research jointly in cooperation with Sikorsky Aircraft. A theory was developed that equal load sharing could be achieved by proper configuration of the geartrain, and a computer code was completed in-house at the NASA Lewis Research Center to calculate this optimal configuration.

  18. Unbalanced Multiple-Description Video Coding with Rate-Distortion Optimization

    NASA Astrophysics Data System (ADS)

    Comas, David; Singh, Raghavendra; Ortega, Antonio; Marqués, Ferran

    2003-12-01

    We propose to use multiple-description coding (MDC) to protect video information against packet losses and delay, while also ensuring that it can be decoded using a standard decoder. Video data are encoded into a high-resolution stream using a standard-compliant encoder. In addition, a low-resolution stream is generated by duplicating the relevant information (motion vectors, headers and some of the DCT coefficients) from the high-resolution stream while the remaining coefficients are set to zero. Both streams are independently decodable by a standard decoder. The corresponding information from the low-resolution stream is decoded only in case of losses in the high-resolution description; otherwise, the received high-resolution description is decoded. The main contribution of this paper is an optimization algorithm which, given the loss ratio, allocates bits to both descriptions and selects the right number of coefficients to duplicate in the low-resolution stream so as to minimize the expected distortion at the decoder end.

  19. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) produced by the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range, with the FLUKA code and the experimental data as references. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  20. On the optimality of the genetic code, with the consideration of coevolution theory by comparison of prominent cost measure matrices.

    PubMed

    Goodarzi, Hani; Najafabadi, Hamed Shateri; Hassani, Kasra; Nejad, Hamed Ahmadi; Torabi, Noorossadat

    2005-08-07

    Statistical and biochemical studies have revealed non-random patterns in codon assignments. The canonical genetic code is known to be highly efficient in minimizing the effects of mistranslation errors and point mutations, since when an amino acid is converted to another due to error, the biochemical properties of the resulting amino acid are usually very similar to those of the original one. In this study, using altered forms of the fitness functions used in prior studies, we have optimized the parameters involved in the calculation of the error-minimizing property of the genetic code so that the genetic code outscores the random codes as much as possible. This work also compares two prominent matrices, the Mutation Matrix and Point Accepted Mutations 74-100 (PAM(74-100)). The results show that the hypothetical properties of the coevolution theory of the genetic code are already considered in PAM(74-100), giving more evidence of the existence of bias towards the genetic code in this matrix. Furthermore, our results indicate that PAM(74-100) is biased towards single-base mistranslation occurrences in the second codon position as well as the frequency of amino acids. Thus PAM(74-100) is not a suitable substitution matrix for studies conducted on the evolution of the genetic code.

  1. Reference values assessment in a Mediterranean population for small dense low-density lipoprotein concentration isolated by an optimized precipitation method

    PubMed Central

    Fernández-Cidón, Bárbara; Padró-Miquel, Ariadna; Alía-Ramos, Pedro; Castro-Castro, María José; Fanlo-Maresma, Marta; Dot-Bach, Dolors; Valero-Politi, José; Pintó-Sala, Xavier; Candás-Estébanez, Beatriz

    2017-01-01

    Background High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD). Their clinical application has been hindered as a consequence of the laborious current method used for their quantification. Objective To optimize a simple and fast precipitation method to isolate sd-LDL particles and establish a reference interval in a Mediterranean population. Materials and methods Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg2+ precipitation method. sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation). Reference values were estimated according to the Clinical and Laboratory Standards Institute and International Federation of Clinical Chemistry and Laboratory Medicine recommendations. sd-LDL-c concentration was measured in sera from 79 subjects with no lipid metabolism abnormalities. Results The Passing–Bablok regression equation is y = 1.52 (0.72 to 1.73) + 0.07x (−0.1 to 0.13), demonstrating no statistically significant differences between the modified precipitation method and the ultracentrifugation reference method. Similarly, no differences were detected when considering only sd-LDL-c from dyslipidemic patients, since the modifications added to the precipitation method facilitated the proper sedimentation of triglycerides and other lipoproteins. The reference interval for sd-LDL-c concentration estimated in a Mediterranean population was 0.04–0.47 mmol/L. Conclusion An optimization of the heparin-Mg2+ precipitation method for sd-LDL particle isolation was performed, and reference intervals were established in a Spanish Mediterranean population. Measured values were equivalent to those obtained with the reference method, assuring its clinical

  2. Combining independent de novo assemblies optimizes the coding transcriptome for nonconventional model eukaryotic organisms.

    PubMed

    Cerveau, Nicolas; Jackson, Daniel J

    2016-12-09

    Next-generation sequencing (NGS) technologies are arguably the most revolutionary technical development to join the list of tools available to molecular biologists since PCR. For researchers working with nonconventional model organisms, one major problem with the currently dominant NGS platform (Illumina) stems from the obligatory fragmentation of nucleic acid material that occurs prior to sequencing during library preparation. This step creates a significant bioinformatic challenge for accurate de novo assembly of novel transcriptome data. This challenge becomes apparent when a variety of modern assembly tools (of which there is no shortage) are applied to the same raw NGS dataset. With the same assembly parameters these tools can generate markedly different assembly outputs. In this study we present an approach that generates an optimized consensus de novo assembly of eukaryotic coding transcriptomes. This approach does not represent a new assembler; rather, it combines the outputs of a variety of established assembly packages and removes redundancy via a series of clustering steps. We test and validate our approach using Illumina datasets from six phylogenetically diverse eukaryotes (three metazoans, two plants and a yeast) and two simulated datasets derived from metazoan reference genome annotations. All of these datasets were assembled using three currently popular assembly packages (CLC, Trinity and IDBA-tran). In addition, we experimentally demonstrate that transcripts unique to one particular assembly package are likely to be bioinformatic artefacts. For all eight datasets our pipeline generates more concise transcriptomes that in fact possess more unique annotatable protein domains than any of the three individual assemblers we employed. Another measure of assembly completeness (using the purpose-built BUSCO databases) also confirmed that our approach yields more information. Our approach yields coding transcriptome assemblies that are more likely to be

  3. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)–the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron’s maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405

  4. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    PubMed

    Harper, Nicol S; Scott, Brian H; Semple, Malcolm N; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)-the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species.

  5. Optimizing coding and reimbursement to improve management of Alzheimer's disease and related dementias.

    PubMed

    Fillit, Howard; Geldmacher, David S; Welter, Richard Todd; Maslow, Katie; Fraser, Malcolm

    2002-11-01

    The objectives of this study were to review the diagnostic, International Classification of Disease, 9th Revision, Clinical Modification (ICD-9-CM), diagnosis related groups (DRGs), and common procedural terminology (CPT) coding and reimbursement issues (including Medicare Part B reimbursement for physicians) encountered in caring for patients with Alzheimer's disease and related dementias (ADRD); to review the implications of these policies for the long-term clinical management of the patient with ADRD; and to provide recommendations for promoting appropriate recognition and reimbursement for clinical services provided to ADRD patients. Relevant English-language articles identified from MEDLINE about ADRD prevalence estimates; disease morbidity and mortality; diagnostic coding practices for ADRD; and Medicare, Medicaid, and managed care organization data on diagnostic coding and reimbursement were reviewed. Alzheimer's disease (AD) is grossly undercoded. Few AD cases are recognized at an early stage. Only 13% of a group of patients receiving the AD therapy donepezil had AD as the primary diagnosis, and AD is rarely included as a primary or secondary DRG diagnosis when the condition precipitating admission to the hospital is caused by AD. In addition, AD is often not mentioned on death certificates, although it may be the proximate cause of death. There is only one ICD-9-CM code for AD (331.0) and no clinical modification codes, despite numerous complications that can be directly attributed to AD. Medicare carriers consider ICD-9 codes for senile dementia (290 series) to be mental health codes and pay them at a lower rate than medical codes. DRG coding is biased against recognition of ADRD as an acute, admitting diagnosis. The CPT code system is an impediment to quality of care for ADRD patients because the complex, time-intensive services ADRD patients require are not adequately, if at all, reimbursed. Also, physicians treating significant numbers of AD patients are

  6. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  7. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    PubMed Central

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing, Lei

    2012-01-01

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
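
    The following toy Python sketch illustrates the kind of TV-regularized fluence-map problem described above, solved with a plain projected-gradient loop rather than TFOCS; the dose-influence matrix, prescription and weights are synthetic assumptions:

      # Toy 1-D fluence-map optimization sketch (not TFOCS itself): minimize
      #   || A f - d ||^2 + lam * TV(f)   subject to f >= 0,
      # where A maps beamlet fluence to voxel dose and TV is a smoothed
      # total-variation term, solved by projected gradient descent.
      import numpy as np

      rng = np.random.default_rng(2)
      n_vox, n_beamlets = 80, 40
      A = np.abs(rng.normal(size=(n_vox, n_beamlets)))   # stand-in dose-influence matrix
      d = np.ones(n_vox)                                  # uniform prescription (arbitrary units)
      lam, eps = 0.05, 1e-3                               # TV weight and smoothing parameter
      step = 0.5 / np.linalg.norm(A, 2) ** 2              # safe step size for the quadratic data term

      def tv_grad(f):
          """Gradient of the smoothed total variation sum_i sqrt((f[i+1]-f[i])^2 + eps^2)."""
          diff = np.diff(f)
          w = diff / np.sqrt(diff ** 2 + eps ** 2)
          g = np.zeros_like(f)
          g[:-1] -= w
          g[1:] += w
          return g

      f = np.zeros(n_beamlets)
      for _ in range(2000):
          grad = 2.0 * A.T @ (A @ f - d) + lam * tv_grad(f)
          f = np.maximum(f - step * grad, 0.0)            # projected gradient step (f >= 0)

      print("data term:", float(np.sum((A @ f - d) ** 2)), "TV:", float(np.sum(np.abs(np.diff(f)))))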

  8. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT).

    PubMed

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing, Lei

    2012-07-01

    A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the same in both cases. For the

  9. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT).

    PubMed

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing, Lei

    2012-07-01

    A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the same in both cases. For the

  10. Code-Switching and the Optimal Grammar of Bilingual Language Use

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.; Bolonyai, Agnes

    2011-01-01

    In this article, we provide a framework of bilingual grammar that offers a theoretical understanding of the socio-cognitive bases of code-switching in terms of five general principles that, individually or through interaction with each other, explain how and why specific instances of code-switching arise. We provide cross-linguistic empirical…

  11. Code-Switching and the Optimal Grammar of Bilingual Language Use

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.; Bolonyai, Agnes

    2011-01-01

    In this article, we provide a framework of bilingual grammar that offers a theoretical understanding of the socio-cognitive bases of code-switching in terms of five general principles that, individually or through interaction with each other, explain how and why specific instances of code-switching arise. We provide cross-linguistic empirical…

  12. Experiences in the Performance Analysis and Optimization of a Deterministic Radiation Transport Code on the Cray SV1

    SciTech Connect

    Peter Cebull

    2004-05-01

    The Attila radiation transport code, which solves the Boltzmann neutron transport equation on three-dimensional unstructured tetrahedral meshes, was ported to a Cray SV1. Cray's performance analysis tools pointed to two subroutines that together accounted for 80%-90% of the total CPU time. Source code modifications were performed to enable vectorization of the most significant loops, to correct unfavorable strides through memory, and to replace a conjugate gradient solver subroutine with a call to the Cray Scientific Library. These optimizations resulted in a speedup of 7.79 for the INEEL's largest ATR model. Parallel scalability of the OpenMP version of the code is also discussed, and timing results are given for other non-vector platforms.

  13. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  14. GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders

    SciTech Connect

    Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed

    2013-06-01

    We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on a matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.

  15. SIFT-based dense pixel tracking on 0.35 T cine-MR images acquired during image-guided radiation therapy with application to gating optimization

    SciTech Connect

    Mazur, Thomas R. E-mail: hli@radonc.wustl.edu; Fischer-Valuck, Benjamin W.; Wang, Yuhe; Yang, Deshan; Mutic, Sasa; Li, H. Harold E-mail: hli@radonc.wustl.edu

    2016-01-15

    Purpose: To first demonstrate the viability of applying an image processing technique for tracking regions on low-contrast cine-MR images acquired during image-guided radiation therapy, and then outline a scheme that uses tracking data for optimizing gating results in a patient-specific manner. Methods: A first-generation MR-IGRT system—treating patients since January 2014—integrates a 0.35 T MR scanner into an annular gantry consisting of three independent Co-60 sources. Obtaining adequate frame rates for capturing relevant patient motion across large fields-of-view currently requires coarse in-plane spatial resolution. This study initially (1) investigates the feasibility of rapidly tracking dense pixel correspondences across single, sagittal-plane images (with both moderate signal-to-noise ratio and spatial resolution) using a matching objective for highly descriptive vectors, called scale-invariant feature transform (SIFT) descriptors, associated with all pixels and describing intensity gradients in local regions around each pixel. To more accurately track features, (2) harmonic analysis was then applied to all pixel trajectories within a region-of-interest across a short training period. In particular, the procedure adjusts the motion of outlying trajectories whose relative spectral power within a frequency bandwidth consistent with respiration (or another form of periodic motion) does not exceed a threshold value that is manually specified following the training period. To evaluate the tracking reliability after applying this correction, conventional metrics—including Dice similarity coefficients (DSCs), mean tracking errors (MTEs), and Hausdorff distances (HDs)—were used to compare target segmentations obtained via tracking to manually delineated segmentations. Upon confirming the viability of this descriptor-based procedure for reliably tracking features, the study (3) outlines a scheme for optimizing gating parameters—including relative target position and a
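
    A small Python sketch of the trajectory-screening idea described in step (2), with an assumed frame rate, respiration band and threshold (the clinical implementation is more involved):

      # For each tracked pixel trajectory, compute the fraction of (non-DC)
      # spectral power inside a respiration band and flag trajectories whose
      # fraction falls below a user-chosen threshold as outliers.
      import numpy as np

      def respiratory_power_fraction(traj, fs, band=(0.1, 0.5)):
          """Fraction of non-DC spectral power of a 1-D trajectory inside `band` (Hz)."""
          traj = np.asarray(traj, dtype=float) - np.mean(traj)
          spectrum = np.abs(np.fft.rfft(traj)) ** 2
          freqs = np.fft.rfftfreq(traj.size, d=1.0 / fs)
          total = spectrum[1:].sum()
          in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
          return 0.0 if total == 0 else in_band / total

      fs = 4.0                                     # assumed cine-MR frame rate (frames/s)
      t = np.arange(0, 30, 1.0 / fs)
      breathing = 5.0 * np.sin(2 * np.pi * 0.25 * t)                    # respiration-dominated trajectory
      drift = np.cumsum(np.random.default_rng(3).normal(size=t.size))   # outlying, non-periodic trajectory

      for name, traj in [("breathing-like", breathing), ("drifting", drift)]:
          frac = respiratory_power_fraction(traj, fs)
          print(f"{name}: band-power fraction = {frac:.2f} ->", "keep" if frac > 0.5 else "flag as outlier")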

  16. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters inherent in the MC simulation codes GATE, PHITS and FLUKA, previously examined for the uniform scanning proton beam, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We

  17. [Non elective cesarean section: use of a color code to optimize management of obstetric emergencies].

    PubMed

    Rudigoz, René-Charles; Huissoud, Cyril; Delecour, Lisa; Thevenet, Simone; Dupont, Corinne

    2014-06-01

    The medical team of the Croix Rousse teaching hospital maternity unit has developed, over the last ten years, a set of procedures designed to respond to various emergency situations necessitating Caesarean section. Using the Lucas classification, we have defined as precisely as possible the degree of urgency of Caesarean sections. We have established specific protocols for the implementation of urgent and very urgent Caesarean sections and have chosen a simple means to convey the degree of urgency to all team members, namely a color code system (red, orange and green). We have set time goals from decision to delivery: 15 minutes for the red code and 30 minutes for the orange code. The results seem very positive: the frequency of urgent and very urgent Caesareans has fallen over time, from 6.1% to 1.6% in 2013. The average time from decision to delivery is 11 minutes for code red Caesareans and 21 minutes for code orange Caesareans. These time goals are now achieved in 95% of cases. Organizational and anesthetic difficulties are the main causes of delays. The indications for red and orange code Caesareans are appropriate more than two times out of three. Perinatal outcomes are generally favorable, code red Caesareans being life-saving in 15% of cases. No increase in maternal complications has been observed. In sum: each obstetric department should have its own protocols for handling urgent and very urgent Caesarean sections. Continuous monitoring of their implementation, relevance and results should be conducted. Management of extreme urgency must be integrated into the management of patients with identified risks (scarred uterus and twin pregnancies, for example), and also in structures without medical facilities (birthing centers). Obstetric teams must keep in mind that implementation of these protocols in no way dispenses with close monitoring of labour.

  18. Optimizing information on drug exposure by collection of package code information in questionnaire surveys.

    PubMed

    Quinzler, R; Schmitt, S P W; Szecsenyi, J; Haefeli, W E

    2007-09-01

    The thorough analysis of special drug characteristics requires information on the specific brand of a drug. This information is often not sought in pharmacoepidemiologic surveys although in many countries packages are labelled with an unequivocal code (in Germany called Pharmazentralnummer (PZN)). We aimed to assess the benefit and quality of PZN information collected in self-completed questionnaires. We performed a survey in 905 ambulatory patients who were asked to list brand name, strength, and the PZN of all drugs they were taking. The medication list was completed by 97.5% (n = 882) of the responding patients (mean age 67.3 years). Altogether 5543 drugs (100%) were mentioned in the questionnaires and for 4230 (76.3%) the exact drug package could be allocated on the basis of the PZN. When PZN was considered in addition to the drug name the quality of drug coding was significantly improved (p < 0.001) with regard to the allocation of drug package (74% versus 2%), brand (90% versus 70%), and strength (96% versus 86%). The time needed for drug coding was three times shorter. The high response rate and high fraction of correct PZN indicate that the collection of package code information is a valuable method to achieve more accurate drug data in questionnaire surveys and to facilitate the drug coding procedure. Copyright (c) 2007 John Wiley & Sons, Ltd.

  19. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, the larger amount of information to be displayed requires supporting technologies, such as digital compression, to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
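
    The hybrid prediction step can be illustrated with the following toy Python sketch (block size, search range and the least-squares gain/offset luminance correction are assumptions, not the authors' exact design):

      # Toy block-wise prediction of the right view from the left view:
      # disparity compensation followed by a gain/offset luminance correction;
      # the residual (right - prediction) is what a lifting-style coder would encode.
      import numpy as np

      def predict_right_from_left(left, right, block=8, max_disp=16):
          """Block-wise disparity search followed by a least-squares gain/offset correction."""
          h, w = left.shape
          pred = np.zeros_like(right, dtype=float)
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  target = right[y:y + block, x:x + block].astype(float)
                  best, best_err = None, np.inf
                  for d in range(0, max_disp + 1):          # horizontal disparity search
                      if x + d + block > w:
                          break
                      cand = left[y:y + block, x + d:x + d + block].astype(float)
                      err = np.sum((cand - target) ** 2)
                      if err < best_err:
                          best, best_err = cand, err
                  gain, offset = np.polyfit(best.ravel(), target.ravel(), 1)  # luminance correction
                  pred[y:y + block, x:x + block] = gain * best + offset
          return pred

      rng = np.random.default_rng(4)
      left = rng.integers(0, 256, size=(64, 64)).astype(float)
      right = np.roll(left, -3, axis=1) * 0.9 + 10.0        # synthetic right view: shifted and dimmed
      residual = right - predict_right_from_left(left, right)
      print("mean absolute prediction residual:", float(np.abs(residual).mean()))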

  20. Optimized sign language video coding based on eye-tracking analysis

    NASA Astrophysics Data System (ADS)

    Agrafiotis, Dimitris; Canagarajah, C. N.; Bull, David R.; Dye, Matt; Twyford, Helen; Kyle, Jim; Chung How, James

    2003-06-01

    The imminent arrival of mobile video telephony will enable deaf people to communicate - as hearing people have been able to do for some time now - anytime/anywhere in their own language, sign language. At low bit rates, coding of sign language sequences is very challenging due to the high level of motion and the need to maintain good image quality to aid understanding. This paper presents optimised coding of sign language video at low bit rates in a way that will favour comprehension of the compressed material by deaf users. Our coding suggestions are based on an eye-tracking study that we have conducted, which allows us to analyse the visual attention of sign language viewers. The results of this study are included in this paper. Analysis and results for two coding methods, one using MPEG-4 video objects and the second using foveation filtering, are presented. Results with foveation filtering are very promising, offering a considerable decrease in bit rate in a way which is compatible with the visual attention patterns of deaf people, as these were recorded in the eye-tracking study.

  1. A study of the optimization method used in the NAVY/NASA gas turbine engine computer code

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Pines, S.

    1977-01-01

    Sources of numerical noise affecting the convergence properties of Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.

  2. Program user's manual for optimizing the design of a liquid or gaseous propellant rocket engine with the automated combustor design code AUTOCOM

    NASA Technical Reports Server (NTRS)

    Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.

    1973-01-01

    This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.

  3. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
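
    A toy Python sketch of the filtering idea for the common GF(2), distance-4 (SEC-DED) case (the patent covers general GF(q) and distance d; the parameters below are illustrative):

      # A candidate column survives the filter only if it is nonzero, differs
      # from every chosen column, and is not the XOR of any two chosen columns,
      # so every set of 3 columns stays linearly independent (minimum distance 4).
      from itertools import combinations

      def build_checks(r, n):
          """Greedily pick n column vectors (as r-bit integers) with every 3 columns independent."""
          chosen = []
          for cand in range(1, 2 ** r):
              if cand in chosen:
                  continue
              if any((a ^ b) == cand for a, b in combinations(chosen, 2)):
                  continue                      # would create 3 linearly dependent columns
              chosen.append(cand)
              if len(chosen) == n:
                  return chosen
          raise ValueError("not enough admissible columns for these parameters")

      # Example: 8 columns with r = 5 check bits (a small distance-4 toy code).
      cols = build_checks(r=5, n=8)
      print([format(c, "05b") for c in cols])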

  4. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  5. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  6. Code optimization of the subroutine to remove near identical matches in the sequence database homology search tool PSI-BLAST.

    PubMed

    Aspnäs, Mats; Mattila, Kimmo; Osowski, Kristoffer; Westerholm, Jan

    2010-06-01

    A central task in protein sequence characterization is the use of a sequence database homology search tool to find similar protein sequences in other individuals or species. PSI-BLAST is a widely used module of the BLAST package that calculates a position-specific score matrix from the best-matching sequences and performs iterated searches, using a procedure that removes near-identical matches so that they do not bias the score matrix. For some queries and parameter settings, PSI-BLAST may find many similar high-scoring matches, and therefore up to 80% of the total run time may be spent in this procedure. In this article, we present code optimizations that improve the cache utilization and the overall performance of this procedure. Measurements show that, for queries where the number of similar matches is high, the optimized PSI-BLAST program may be as much as 2.9 times faster than the original program.

  7. Combining nanocharacter printing, digital watermarking, and UV-coded taggents for optimal machine-readable security

    NASA Astrophysics Data System (ADS)

    Phillips, George K.

    2002-04-01

    The ability to combine printed encrypted nano/micro structures and nano alpha/numeric algorithms ('NaNOcopy'/'LogoDot') with embedded hidden digital data (a 'digital watermark') and/or coded UV taggents (TechMark), creating the ultimate machine-readable 'lock / hide-a-key / key' protection for document or packaging security, is new. Extremely minute nano characters, structures, photographs, or logos can be printed on a document in a specific pattern configured to form an anti-copy latent warning message, which appears when copied. The NaNOcopy structures or LogoDots are uniquely micro-printed to formulate certain encrypted information or an algorithm calculation for further verification and protection from counterfeiting or alteration. Major companies such as IBM, Xerox, Digimark and Spectra Systems are presently offering digital watermarking technologies to secure both digital and analog content. Appleton Security Products has a VeriCam hand-held reader, which can detect the combination of a substrate-embedded UV-coded taggent (TechMark) with the presence of other data such as a digital watermark and NaNOcopy/LogoDot printing. Unless the reader identifies the presence of the TechMark UV-coded taggents, the data carrier cannot be opened.

  8. DENSE MEDIUM CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell; Chris J. Barbee; Peter J. Bethell; Chris J. Wood

    2005-06-30

    Dense medium cyclones (DMCs) are known to be efficient, high-tonnage devices suitable for upgrading particles in the 50 to 0.5 mm size range. This versatile separator, which uses centrifugal forces to enhance the separation of fine particles that cannot be upgraded in static dense medium separators, can be found in most modern coal plants and in a variety of mineral plants treating iron ore, dolomite, diamonds, potash and lead-zinc ores. Due to the high tonnage, a small increase in DMC efficiency can have a large impact on plant profitability. Unfortunately, the knowledge base required to properly design and operate DMCs has been seriously eroded during the past several decades. In an attempt to correct this problem, a set of engineering tools have been developed to allow producers to improve the efficiency of their DMC circuits. These tools include (1) low-cost density tracers that can be used by plant operators to rapidly assess DMC performance, (2) mathematical process models that can be used to predict the influence of changes in operating and design variables on DMC performance, and (3) an expert advisor system that provides plant operators with a user-friendly interface for evaluating, optimizing and trouble-shooting DMC circuits. The field data required to develop these tools was collected by conducting detailed sampling and evaluation programs at several industrial plant sites. These data were used to demonstrate the technical, economic and environmental benefits that can be realized through the application of these engineering tools.
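
    For illustration, DMC performance assessed with density tracers is often summarized by an empirical partition curve; the following Python sketch uses a common logistic form parameterized by the cut density and the Ep value (this particular model and the numbers are assumptions, not necessarily the project's own tools):

      # Logistic partition-curve model: probability that a particle of density
      # rho reports to sinks; the ln(3) scaling makes P = 0.25 at rho50 - ep
      # and P = 0.75 at rho50 + ep, matching the usual definition of Ep.
      import numpy as np

      def partition_to_sinks(rho, rho50, ep):
          return 1.0 / (1.0 + np.exp(np.log(3.0) * (rho50 - rho) / ep))

      rho = np.linspace(1.2, 2.2, 11)          # tracer densities, g/cm^3 (illustrative)
      for r, p in zip(rho, partition_to_sinks(rho, rho50=1.60, ep=0.03)):
          print(f"density {r:.2f}: {100 * p:5.1f}% to sinks")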

  9. Optimizing the FEDVR-TDCC code for exploring the quantum dynamics of two-electron systems in intense laser pulses.

    PubMed

    Hu, S X

    2010-05-01

    To efficiently solve the three-dimensional (3D) time-dependent linear and nonlinear Schrödinger equation, we have developed a large-scale parallel code RSP-FEDVR [B. I. Schneider, L. A. Collins, and S. X. Hu, Phys. Rev. E 73, 036708 (2006)], which combines the finite-element discrete variable representation (FEDVR) with the real-space product algorithm. Using a similar algorithm, we have derived an accurate approach to solve the time-dependent close-coupling (TDCC) equation for exploring two-electron dynamics in linearly polarized intense laser pulses. However, when the number (N) of partial waves used for the TDCC expansion increases, the FEDVR-TDCC code unfortunately slows down, because the potential-matrix operation scales as ~O(N^2). In this paper, we show that the full potential-matrix operation can be decomposed into a series of small-matrix operations utilizing the sparse property of the [N×N] potential matrix. Such optimization speeds up the FEDVR-TDCC code by an order of magnitude for N=256. This may facilitate the ultimate solution of the 3D two-electron quantum dynamics in ultrashort intense optical laser pulses, where a large number of partial waves are required.
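
    The essence of the optimization can be illustrated in Python with a sparse stand-in for the partial-wave coupling matrix (the real code is a parallel FEDVR implementation; the banded structure and sizes below are assumptions):

      # When the N x N coupling matrix is mostly zeros, applying it as a sparse
      # operator replaces the O(N^2) dense matrix-vector product with work
      # proportional to the number of nonzero couplings.
      import time
      import numpy as np
      from scipy import sparse

      N, radial_points = 256, 2000
      rng = np.random.default_rng(5)

      # Banded stand-in for the potential matrix: only nearby partial waves couple.
      dense_V = np.zeros((N, N))
      for offset in range(-3, 4):
          idx = np.arange(max(0, -offset), min(N, N - offset))
          dense_V[idx, idx + offset] = rng.normal(size=idx.size)
      sparse_V = sparse.csr_matrix(dense_V)

      psi = rng.normal(size=(N, radial_points))      # partial-wave expansion of the wavefunction

      t0 = time.perf_counter(); dense_result = dense_V @ psi; t1 = time.perf_counter()
      sparse_result = sparse_V @ psi; t2 = time.perf_counter()

      print("max difference:", float(np.abs(dense_result - sparse_result).max()))
      print(f"dense: {t1 - t0:.4f} s   sparse: {t2 - t1:.4f} s")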

  10. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
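
    A toy version of such an ILP can be written with SciPy's MILP interface; the rates, utilities and resource budget below are made up, and the paper's actual model includes more constraints:

      # Assign one MCS to each SVC layer so as to maximize a utility (layer value
      # weighted by the fraction of users able to decode that MCS) subject to a
      # per-frame time/symbol budget.
      import numpy as np
      from scipy.optimize import Bounds, LinearConstraint, milp

      layers, mcs = 3, 4                                 # SVC layers and available MCS indices
      rate = np.array([0.5, 1.0, 1.5, 2.0])              # bits per symbol for each MCS
      layer_bits = np.array([300.0, 200.0, 200.0])       # bits per frame carried by each layer
      coverage = np.array([1.0, 0.8, 0.5, 0.3])          # fraction of users able to decode each MCS
      layer_value = np.array([3.0, 2.0, 1.0])            # quality contribution of each layer

      # Binary decision variable x[l, m] = 1 if layer l is sent with MCS m (flattened row-major).
      utility = (layer_value[:, None] * coverage[None, :]).ravel()
      symbols = (layer_bits[:, None] / rate[None, :]).ravel()   # time cost of each (layer, MCS) choice

      assign = np.zeros((layers, layers * mcs))           # each layer gets exactly one MCS
      for l in range(layers):
          assign[l, l * mcs:(l + 1) * mcs] = 1.0

      constraints = [
          LinearConstraint(assign, lb=1.0, ub=1.0),
          LinearConstraint(symbols.reshape(1, -1), lb=0.0, ub=900.0),  # per-frame symbol budget
      ]
      res = milp(c=-utility,                              # maximize utility == minimize its negative
                 constraints=constraints,
                 integrality=np.ones(layers * mcs),
                 bounds=Bounds(0, 1))

      choice = res.x.reshape(layers, mcs).argmax(axis=1)
      used = symbols.reshape(layers, mcs)[np.arange(layers), choice].sum()
      print("chosen MCS per layer:", choice.tolist(), " symbols used:", used)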

  11. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  12. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  13. Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Askri, Boubaker

    2015-10-01

    Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low energy bremsstrahlung photons with beryllium material. A benchmark test showed that a good agreement was achieved when comparing the emitted neutron flux spectra predicted by Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two stage Monte Carlo simulation. In the first stage, the distributions of the seven phase space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 1010 neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 109 neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.

  14. Code Optimization for the Choi-Williams Distribution for ELINT Applications

    DTIC Science & Technology

    2009-12-01

    No abstract was recovered for this record; only report documentation-page fragments are available. Subject terms: Choi-Williams Distribution, Signal Processing, Algorithm Optimization, C programming, Low Probability of Intercept (LPI). Report length: 98 pages. Author: Phillip E. Pace. Cited references include Applied Mathematics Series-55 (issued June 1964; seventh printing, May 1968, with corrections) and Oppenheim & Schafer, Digital Signal Processing.

  15. Quality optimized medical image information hiding algorithm that employs edge detection and data coding.

    PubMed

    Al-Dmour, Hayat; Al-Ani, Ahmed

    2016-04-01

    The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. The proposed information security scheme conceals coded Electronic Patient Records (EPRs) in medical images in order to protect the EPRs' confidentiality without affecting the image quality and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data are converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions compared to uniform regions, a simple edge detection method has been introduced to identify and embed in edge pixels, which leads to improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of the edges. Moreover, to increase the efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first, which is based on the Hamming code, is simple and fast, while the other, known as the Syndrome Trellis Code (STC), is more sophisticated as it attempts to find a stego image that is close to the cover image by minimizing the embedding impact. The proposed steganography algorithm embeds the secret data bits in the Region of Non-Interest (RONI); owing to its diagnostic importance, the ROI is preserved from modification. The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques. The proposed medical imaging information system proved to be capable of concealing EPR data and producing imperceptible stego images with minimal distortion.

  16. Performance of an improved logarithmic phase mask with optimized parameters in a wavefront-coding system.

    PubMed

    Zhao, Hui; Li, Yingcai

    2010-01-10

    In two papers [Proc. SPIE 4471, 272-280 (2001) and Appl. Opt. 43, 2709-2721 (2004)], a logarithmic phase mask was proposed and proved to be effective in extending the depth of field; however, according to our research, this mask is not ideal because the corresponding defocused modulation transfer function has large oscillations in the low-frequency region, even when the mask is optimized. So, in a previously published paper [Opt. Lett. 33, 1171-1173 (2008)], we proposed an improved logarithmic phase mask by making a small modification. The new mask not only eliminates these drawbacks to a certain extent but is also even less sensitive to focus errors according to the Fisher information criterion. However, the performance comparison was carried out without optimizing the modified mask, which was not reasonable. In this manuscript, we first optimize the modified logarithmic phase mask and then analyze its performance; more convincing results are obtained based on several frequently used metrics.

  17. Dense topological spaces and dense continuity

    NASA Astrophysics Data System (ADS)

    Aldwoah, Khaled A.

    2013-09-01

    There are several attempts to generalize (or "widen") the concept of topological space. This paper uses equivalence relations to generalize the concept of topological space. Through this generalization, a particular topology on a nonempty set X gives rise to many new topologies; we call each of these new topologies a dense topology. In addition, we formulate some simple properties of dense topologies and study suitable generalizations of the concepts of limit points, closeness and continuity, as well as Jackson, Nörlund and Hahn dense topologies.

  18. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    SciTech Connect

    Yeaw, Christopher T.

    1995-01-01

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.

  19. Optimal rate control for video coding based on a hybrid MMAX/MMSE criterion

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Yong; Ortega, Antonio

    2003-05-01

    In this paper, we consider the problem of rate control for video transmission. We focus on finding off-line optimal rate control for constant bit-rate (CBR) transmission, where the size of the encoder buffer and the channel rate are the constraints. To ensure that a maximum minimum quality is obtained over all data units (e.g., macroblocks, video frames or groups of pictures), we use a minimum maximum distortion (MMAX) criterion for this buffer-constrained problem. We show that, due to the buffer constraints, a MMAX solution leads to a relatively low average distortion, because the total rate budget is not completely utilized. Therefore, after finding a MMAX solution, an additional minimization of the average distortion is proposed to increase the overall quality of the data sequence by using the remaining resources. The proposed algorithm (denoted MMAX+ as it incorporates both MMAX and the additional average quality optimization stage) leads to an increase in average quality with respect to the MMAX solution, while providing much more constant quality than MMSE solutions. Moreover, we show how the MMAX+ approach can be implemented with low complexity.

  20. Optimized multilevel codebook searching algorithm for vector quantization in image coding

    NASA Astrophysics Data System (ADS)

    Cao, Hugh Q.; Li, Weiping

    1996-02-01

    An optimized multi-level codebook searching algorithm (MCS) for vector quantization is presented in this paper. Although it belongs to the category of fast nearest neighbor searching (FNNS) algorithms for vector quantization, the MCS algorithm is not a variation of any existing FNNS algorithm (such as the k-d tree searching algorithm, the partial-distance searching algorithm, or the triangle-inequality searching algorithm). A multi-level search theory is introduced. The problem of implementing this theory is solved by a specially defined irregular tree structure that can be built from a training set. This irregular tree structure is different from the tree structures used in TSVQ, pruned-tree VQ and quadtree VQ. Strictly speaking, it cannot be called a tree structure, since it allows a node to have more than one set of parents; it is actually a directed graph. This is the essential difference between the MCS algorithm and other TSVQ algorithms, and it ensures its better performance. An efficient design procedure is given to find the optimized irregular tree for a practical source. The simulation results of applying the MCS algorithm to image VQ show that this algorithm can reduce the searching complexity to less than 3% of that of exhaustive-search vector quantization (ESVQ) (4096 codevectors and dimension 16) while introducing negligible error (0.064 dB degradation from ESVQ). Simulation results also show that the searching complexity increases approximately linearly with bit rate.
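
    To illustrate why a hierarchical search is cheaper than exhaustive search, the sketch below builds a generic two-level index over a codebook (coarse centroids from k-means, then an exhaustive search restricted to the selected group). This stand-in is an assumption of ours, not the paper's irregular directed-graph MCS structure, and unlike the MCS algorithm it may miss the true nearest codevector near group boundaries.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def build_two_level_index(codebook, n_groups=16):
    """Coarse level: cluster codevectors; each group keeps the indices of its members."""
    centroids, labels = kmeans2(codebook, n_groups, minit="++")
    groups = [np.where(labels == g)[0] for g in range(n_groups)]  # assumed non-empty
    return centroids, groups

def two_level_search(x, codebook, centroids, groups):
    g = np.argmin(np.sum((centroids - x) ** 2, axis=1))           # coarse search
    members = groups[g]
    best = members[np.argmin(np.sum((codebook[members] - x) ** 2, axis=1))]
    return best                                                    # index into the full codebook

rng = np.random.default_rng(1)
codebook = rng.normal(size=(4096, 16))                             # 4096 codevectors, dimension 16
centroids, groups = build_two_level_index(codebook)
idx = two_level_search(rng.normal(size=16), codebook, centroids, groups)
print("selected codevector index:", idx)
```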

  1. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE PAGES

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...

    2017-01-26

    Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  2. A comprehensive method for preliminary design optimization of axial gas turbine stages. II - Code verification

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1983-01-01

    The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.

  3. Optimal coding-decoding for systems controlled via a communication channel

    NASA Astrophysics Data System (ADS)

    Yi-wei, Feng; Guo, Ge

    2013-12-01

    In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Different from previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for different network configurations. The results are useful in determining the minimum channel capacity needed in order to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.

  4. ROCOPT: A user friendly interactive code to optimize rocket structural components

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1989-01-01

    ROCOPT is a user-friendly, graphically-interfaced, microcomputer-based computer program (IBM compatible) that optimizes rocket components by minimizing the structural weight. The rocket components considered are ring stiffened truncated cones and cylinders. The applied loading is static, and can consist of any combination of internal or external pressure, axial force, bending moment, and torque. Stress margins are calculated by means of simple closed form strength of material type equations. Stability margins are determined by approximate, orthotropic-shell, closed-form equations. A modified form of Powell's method, in conjunction with a modified form of the external penalty method, is used to determine the minimum weight of the structure subject to stress and stability margin constraints, as well as user input constraints on the structural dimensions. The graphical interface guides the user through the required data prompts, explains program options and graphically displays results for easy interpretation.

  5. BMI optimization by using parallel UNDX real-coded genetic algorithm with Beowulf cluster

    NASA Astrophysics Data System (ADS)

    Handa, Masaya; Kawanishi, Michihiro; Kanki, Hiroshi

    2007-12-01

    This paper deals with a global optimization algorithm for Bilinear Matrix Inequalities (BMIs) based on the Unimodal Normal Distribution Crossover (UNDX) GA. First, analyzing the structure of the BMIs, the existence of typical difficult structures is confirmed. Then, in order to improve the performance of the algorithm, based on the results of this structural analysis and on characteristic properties of BMIs, we propose an algorithm that uses a primary search direction obtained from a relaxed Linear Matrix Inequality (LMI) convex estimation. Moreover, we propose two types of evaluation methods for GA individuals based on LMI calculations that further exploit the characteristic properties of BMIs. In addition, in order to reduce computation time, we parallelize the real-coded GA using a master-worker paradigm with cluster computing techniques.

  6. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

    Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum-99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had been 'experimentally demonstrated to be among the safest of all various type of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code Fluidity; the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  7. Dense Deposit Disease

    PubMed Central

    Smith, Richard J.H; Harris, Claire L.; Pickering, Matthew C.

    2011-01-01

    Dense deposit disease (DDD) is an orphan disease that primarily affects children and young adults without sexual predilection. Studies of its pathophysiology have shown conclusively that it is caused by fluid-phase dysregulation of the alternative pathway of complement, however the role played by genetics and autoantibodies like C3 nephritic factors must be more thoroughly defined if we are to make an impact in the clinical management of this disease. There are currently no mechanism-directed therapies to offer affected patients, half of whom progress to end stage renal failure disease within 10 years of diagnosis. Transplant recipients face the dim prospect of disease recurrence in their allografts, half of which ultimately fail. More detailed genetic and complement studies of DDD patients may make it possible to identify protective factors prognostic for naïve kidney and transplant survival, or conversely risk factors associated with progression to renal failure and allograft loss. The pathophysiology of DDD suggests that a number of different treatments warrant consideration. As advances are made in these areas, there will be a need to increase healthcare provider awareness of DDD by making resources available to clinicians to optimize care for DDD patients. PMID:21601923

  8. 2.5D Numerical Simulation of Excitation of Coherent Chain of Electron Wake-Field Bubbles by Optimal Non-Resonant Chain of Dense Relativistic Electron Bunches

    SciTech Connect

    Maslov, V. I.; Lotov, K. V.; Onishchenko, I. N.; Svistun, O. M.

    2010-06-16

    It is shown that an optimal difference exists between the repetition frequency of the electron bunches and that of the wake-field bubbles, such that the first N-1 drive bunches strengthen the chain of wake-field bubbles and the N-th bunch arrives in the maximal accelerating wakefield.

  9. Atoms in dense plasmas

    SciTech Connect

    More, R.M.

    1986-01-01

    Recent experiments with high-power pulsed lasers have strongly encouraged the development of improved theoretical understanding of highly charged ions in a dense plasma environment. This work examines the theory of dense plasmas with emphasis on general rules which govern matter at extreme high temperature and density. 106 refs., 23 figs.

  10. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques permit capturing a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions. Thus, exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis. For this purpose, an optimization problem that seeks to minimize a joint l2 - l1 norm is solved to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, since only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that seeks to minimize the l2-norm, penalized by the l1-norm to force the solution to be sparse, and penalized by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of the peak signal-to-noise ratio (PSNR).
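
    A minimal sketch of the simultaneously sparse and low-rank recovery idea, assuming a generic linear measurement operator A acting on the vectorized image and a heuristic alternation of the two proximal operators (soft-thresholding for the l1 penalty, singular-value thresholding for the nuclear norm). It is not the paper's solver; the step size and penalty weights are placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(X, t):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, t)) @ Vt

def recover(y, A, shape, lam_l1=0.1, lam_nuc=0.1, step=1e-3, iters=300):
    """Heuristic proximal-gradient alternation for
    0.5*||y - A x||^2 + lam_l1*||x||_1 + lam_nuc*||mat(x)||_*  (x = vectorized image)."""
    x = np.zeros(A.shape[1])                               # A.shape[1] must equal prod(shape)
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))                 # gradient step on the data term
        x = soft_threshold(x, step * lam_l1)               # sparsity prox
        x = svt(x.reshape(shape), step * lam_nuc).ravel()  # low-rank prox
    return x.reshape(shape)
```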

  11. User's guide for the BNW-III optimization code for modular dry/wet-cooled power plants

    SciTech Connect

    Braun, D.J.; Faletti, D.W.

    1984-09-01

    This user's guide describes BNW-III, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of the Dry Cooling Enhancement Program sponsored by the US Department of Energy (DOE). The BNW-III code models a modular dry/wet cooling system for a nuclear or fossil fuel power plant. The purpose of this guide is to give the code user a brief description of what the BNW-III code is and how to use it. It describes the cooling system being modeled and the various models used. A detailed description of code input and code output is also included. The BNW-III code was developed to analyze a specific cooling system layout. However, there is a large degree of freedom in the type of cooling modules that can be selected and in the performance of those modules. The costs of the modules are input to the code, giving the user a great deal of flexibility.

  12. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization.

    PubMed

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
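
    For readers unfamiliar with the BP reconstruction step, the sketch below is a naive delay-and-sum back-projection for a 2D photoacoustic image on the CPU with NumPy. It only illustrates the principle; the paper's GPU implementation, transducer geometry, and sampling parameters are not reproduced, and all names here are hypothetical.

```python
import numpy as np

def backproject(signals, fs, elem_pos, grid_x, grid_z, c=1540.0):
    """Naive delay-and-sum back-projection.
    signals: (n_elements, n_samples) RF data with t = 0 at the laser firing,
    fs: sampling rate in Hz, elem_pos: list of (x, z) element positions in m,
    grid_x, grid_z: 1-D arrays of pixel coordinates in m, c: speed of sound in m/s."""
    img = np.zeros((grid_z.size, grid_x.size))
    for e, (ex, ez) in enumerate(elem_pos):
        dx = grid_x[None, :] - ex
        dz = grid_z[:, None] - ez
        delay_idx = np.round(np.sqrt(dx**2 + dz**2) / c * fs).astype(int)
        delay_idx = np.clip(delay_idx, 0, signals.shape[1] - 1)
        img += signals[e, delay_idx]          # accumulate the sample each pixel maps to
    return img
```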

  13. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  14. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  15. Optimizing Excited-State Electronic-Structure Codes for Intel Knights Landing: A Case Study on the BerkeleyGW Software

    SciTech Connect

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Barnes, Taylor; Wichmann, Nathan; Raman, Karthik; Sasanka, Ruchira; Louie, Steven G.

    2016-10-06

    We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels as well as on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread and node-level parallelism. We discuss locality changes (including the consequence of the lack of L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights-Landing including a roofline study of code performance before and after a number of optimizations. We find that the GW method is particularly well-suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band-pairs, and frequencies.

  16. Kinetic Simulations of Dense Plasma Focus Breakdown

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Higginson, D. P.; Jiang, S.; Link, A.; Povilus, A.; Sears, J.; Bennett, N.; Rose, D. V.; Welch, D. R.

    2015-11-01

    A dense plasma focus (DPF) device is a type of plasma gun that drives current through a set of coaxial electrodes to assemble gas inside the device and then implode that gas on axis to form a Z-pinch. This implosion drives hydrodynamic and kinetic instabilities that generate strong electric fields, which produces a short intense pulse of x-rays, high-energy (>100 keV) electrons and ions, and (in deuterium gas) neutrons. A strong factor in pinch performance is the initial breakdown and ionization of the gas along the insulator surface separating the two electrodes. The smoothness and isotropy of this ionized sheath are imprinted on the current sheath that travels along the electrodes, thus making it an important portion of the DPF to both understand and optimize. Here we use kinetic simulations in the Particle-in-cell code LSP to model the breakdown. Simulations are initiated with neutral gas and the breakdown modeled self-consistently as driven by a charged capacitor system. We also investigate novel geometries for the insulator and electrodes to attempt to control the electric field profile. The initial ionization fraction of gas is explored computationally to gauge possible advantages of pre-ionization which could be created experimentally via lasers or a glow-discharge. Prepared by LLNL under Contract DE-AC52-07NA27344.

  17. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    NASA Astrophysics Data System (ADS)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into QR codes, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the processing involved. Recovered QR codes can be successfully scanned thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes yields a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need for a sequence to retrieve the outcome.

  18. FAST GYROSYNCHROTRON CODES

    SciTech Connect

    Fleishman, Gregory D.; Kuznetsov, Alexey A.

    2010-10-01

    Radiation produced by charged particles gyrating in a magnetic field is highly significant in the astrophysics context. Persistently increasing resolution of astrophysical observations calls for corresponding three-dimensional modeling of the radiation. However, available exact equations are prohibitively slow in computing a comprehensive table of high-resolution models required for many practical applications. To remedy this situation, we develop approximate gyrosynchrotron (GS) codes capable of quickly calculating the GS emission (in non-quantum regime) from both isotropic and anisotropic electron distributions in non-relativistic, mildly relativistic, and ultrarelativistic energy domains applicable throughout a broad range of source parameters including dense or tenuous plasmas and weak or strong magnetic fields. The computation time is reduced by several orders of magnitude compared with the exact GS algorithm. The new algorithm performance can gradually be adjusted to the user's needs depending on whether precision or computation speed is to be optimized for a given model. The codes are made available for users as a supplement to this paper.

  19. Dense gas flow in minimum length nozzles

    SciTech Connect

    Aldo, A.C.; Argrow, B.M.

    1995-06-01

    Recently, dense gases have been investigated for many engineering applications such as for turbomachinery and wind tunnels. Supersonic nozzle design can be complicated by nonclassical dense-gas behavior in the transonic flow regime. In this paper, a method of characteristics (MOC) is developed for two-dimensional (planar) and axisymmetric flow of a van der Waals gas. A minimum length nozzle design code is developed that employs the MOC procedure to generate an inviscid wall contour. The van der Waals results are compared to perfect gas results to show the real-gas effects on the flow properties and inviscid wall contours.

  20. Optimization of Grit-Blasting Process Parameters for Production of Dense Coatings on Open Pores Metallic Foam Substrates Using Statistical Methods

    NASA Astrophysics Data System (ADS)

    Salavati, S.; Coyle, T. W.; Mostaghimi, J.

    2015-10-01

    Open pore metallic foam core sandwich panels prepared by thermal spraying of a coating on the foam structures can be used as high-efficiency heat transfer devices due to their high surface area to volume ratio. The structural, mechanical, and physical properties of thermally sprayed skins play a significant role in the performance of the related devices. These properties are mainly controlled by the porosity content, oxide content, adhesion strength, and stiffness of the deposited coating. In this study, the effects of grit-blasting process parameters on the characteristics of the temporary surface created on the metallic foam substrate and on the twin-wire arc-sprayed alloy 625 coating subsequently deposited on the foam were investigated through response surface methodology. Characterization of the prepared surface and sprayed coating was conducted by scanning electron microscopy, roughness measurements, and adhesion testing. Using statistical design of experiments, response surface method, a model was developed to predict the effect of grit-blasting parameters on the surface roughness of the prepared foam and also the porosity content of the sprayed coating. The coating porosity and adhesion strength were found to be determined by the substrate surface roughness, which could be controlled by grit-blasting parameters. Optimization of the grit-blasting parameters was conducted using the fitted model to minimize the porosity content of the coating while maintaining a high adhesion strength.

  1. Eddy current-nulled convex optimized diffusion encoding (EN-CODE) for distortion-free diffusion tensor imaging with short echo times.

    PubMed

    Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B

    2017-04-25

    To design and evaluate eddy current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy current-compensated. The EN-CODE DTI waveform was compared with the existing eddy current-nulled twice-refocused spin echo (TRSE) sequence as well as with monopolar (MONO) and non-eddy current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuroimaging in 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10^-5) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
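
    For context, eddy currents are commonly modeled as an exponentially decaying response to the gradient slew rate. Under that model (an assumption here, with G(t) the diffusion gradient waveform, lambda a representative eddy-current time constant, and T the end of diffusion encoding), an eddy-current-nulled waveform must satisfy a linear constraint of the form

```latex
e_{\lambda}(T) \;\propto\; \int_{0}^{T} \frac{dG(t')}{dt'}\, e^{-(T-t')/\lambda}\, dt' \;=\; 0 ,
```

    which a convex-optimized (CODE-style) waveform design can impose, typically over a range of plausible time constants lambda.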

  2. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    SciTech Connect

    Dellin, T.A.; Fish, M.J.; Yang, C.L.

    1981-08-01

    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.

  3. DIANE multiparticle transport code

    NASA Astrophysics Data System (ADS)

    Caillaud, M.; Lemaire, S.; Ménard, S.; Rathouit, P.; Ribes, J. C.; Riz, D.

    2014-06-01

    DIANE is the general Monte Carlo code developed at CEA-DAM. DIANE is a 3D multiparticle multigroup code. DIANE includes automated biasing techniques and is optimized for massive parallel calculations.

  4. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, R.L.

    1993-10-12

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  5. Dense high temperature ceramic oxide superconductors

    DOEpatents

    Landingham, Richard L.

    1993-01-01

    Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.

  6. Improvements in accuracy of dense OPC models

    NASA Astrophysics Data System (ADS)

    Kallingal, Chidam; Oberschmidt, James; Viswanathan, Ramya; Abdo, Amr; Park, OSeo

    2008-10-01

    Performing model-based optical proximity correction (MBOPC) on layouts has become an integral part of patterning advanced integrated circuits. Earlier technologies used sparse OPC, the run times of which explode when the density of layouts increases. With the move to 45 nm technology node, this increase in run time has resulted in a shift to dense simulation OPC, which is pixel-based. The dense approach becomes more efficient at 45nm technology node and beyond. New OPC model forms can be used with the dense simulation OPC engine, providing the greater accuracy required by smaller technology nodes. Parameters in the optical model have to be optimized to achieve the required accuracy. Dense OPC uses a resist model with a different set of parameters than sparse OPC. The default search ranges used in the optimization of these resist parameters do not always result in the best accuracy. However, it is possible to improve the accuracy of the resist models by understanding the restrictions placed on the search ranges of the physical parameters during optimization. This paper will present results showing the correlation between accuracy of the models and some of these optical and resist parameters. The results will show that better optimization can improve the model fitness of features in both the calibration and verification set.

  7. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: we show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs; we show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs; we find and classify all 2D homological stabilizer codes; we find optimal codes among the homological stabilizer codes.
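
    As a concrete companion to the toric-code case mentioned above, the sketch below constructs the vertex (X-type) and plaquette (Z-type) stabilizers of Kitaev's toric code on an L x L periodic lattice in binary symplectic form and verifies that they mutually commute. The edge-indexing convention is one common choice made here for illustration, not necessarily the paper's.

```python
import numpy as np
from itertools import product

L = 3                         # lattice size; qubits live on the 2*L*L edges of an L x L torus
n = 2 * L * L

def h(i, j): return (i % L) * L + (j % L)          # horizontal edge leaving vertex (i, j)
def v(i, j): return L * L + (i % L) * L + (j % L)  # vertical edge leaving vertex (i, j)

def star(i, j):        # X-stabilizer: the 4 edges meeting at vertex (i, j)
    return [h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)]

def plaquette(i, j):   # Z-stabilizer: the 4 edges around the face with corner (i, j)
    return [h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)]

def pauli(support, kind):
    """Binary symplectic vector (x | z) of an X- or Z-type Pauli on the given qubits."""
    vec = np.zeros(2 * n, dtype=int)
    offset = 0 if kind == "X" else n
    for q in support:
        vec[offset + q] = 1
    return vec

def commute(a, b):
    ax, az, bx, bz = a[:n], a[n:], b[:n], b[n:]
    return (ax @ bz + az @ bx) % 2 == 0            # symplectic inner product = 0 => commute

stabilizers = [pauli(star(i, j), "X") for i, j in product(range(L), repeat=2)]
stabilizers += [pauli(plaquette(i, j), "Z") for i, j in product(range(L), repeat=2)]
assert all(commute(a, b) for a in stabilizers for b in stabilizers)
print(f"{len(stabilizers)} stabilizers on {n} qubits all mutually commute")
```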

  8. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operations play a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, which lead to high table power consumption. Aiming to solve the problem of the large number of table memory accesses in current methods, and thereby reduce the high power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing index search technology to reduce the memory accesses required for table look-up, and hence reduce the table power consumption. Specifically, in our scheme, we use index search technology to reduce memory accesses by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with a sequential-search table look-up scheme, and thus save considerable power consumption for CAVLD in H.264/AVC.
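
    A toy illustration of the index-search idea: instead of scanning a list of codewords, the decoder uses the pair (zero-run length of the prefix, suffix value) as a direct key into a table. The code table and suffix lengths below are hypothetical and far simpler than the real H.264 CAVLC tables.

```python
SUFFIX_LEN = {0: 0, 1: 1, 2: 2}           # hypothetical: suffix bits per prefix zero-run length
SYMBOL = {                                # (zero_run, suffix_value) -> decoded symbol
    (0, 0): 0,
    (1, 0): 1, (1, 1): 2,
    (2, 0): 3, (2, 1): 4, (2, 2): 5, (2, 3): 6,
}

def decode(bits):
    """Decode a complete, valid bitstream; each codeword is <zeros><1><suffix>."""
    out, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i] == "0":             # zero run of the prefix
            z += 1
            i += 1
        i += 1                            # skip the terminating '1'
        s = SUFFIX_LEN[z]
        suffix = int(bits[i:i + s], 2) if s else 0
        i += s
        out.append(SYMBOL[(z, suffix)])   # one dictionary access instead of a table scan
    return out

print(decode("1" "010" "00100" "00111"))  # -> [0, 1, 3, 6]
```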

  9. Improvement of Predictive Accuracy on Subchannel Analysis Code (NASCA) for Tight-Lattice Rod Bundle Tests - Optimization of UEDA'S Entrainment Model Parameter and Cross Flow Model Parameters

    SciTech Connect

    Hiromasa Chitose; Akitoshi Hotta; Akira Ohnuki; Ken Fujimura

    2006-07-01

    The Reduced-Moderation Water Reactor (RMWR) is being developed at the Japan Atomic Energy Agency, and demonstration of the core heat removal performance is one of the most important issues. However, a full-scale bundle experiment is technically difficult to operate because of the large fuel rod bundle size, which would consume a huge amount of electricity. Hence, an analysis code that simulates RMWR core thermal-hydraulic performance with high accuracy is desired. Subchannel analysis is the most powerful technique to resolve the problem. A subchannel analysis code, NASCA (Nuclear-reactor Advanced Sub-Channel Analysis code), has been developed to improve capabilities of analyzing transient two-phase flow phenomena, boiling transition (BT) and post-BT, and the NASCA code is applicable to thermal-hydraulic analysis of current BWR fuel. In the present study, the prediction accuracy of the NASCA code has been investigated using reduced-scale rod bundle test data, and its applicability to the RMWR has been improved by optimizing the mechanistic constitutive models. (authors)

  10. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.

    1995-06-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  11. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.

    1995-12-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  12. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  13. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are also used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and the D-PSO algorithm.
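
    For reference, a bare-bones global-best particle swarm optimizer is sketched below. It is generic PSO, not the paper's D-PSO multi-sensor detector; the inertia and acceleration coefficients are typical textbook values, and the toy objective at the end is only a placeholder for a detection cost.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f(x) over a box with a standard global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # position update
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, cost = pso(lambda z: np.sum((z - 1.0) ** 2), dim=4)           # toy objective
print(best, cost)
```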

  14. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are also used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and the D-PSO algorithm. PMID:26343660

  15. Optimal speckle suppression in laser projectors using a single two-dimensional Barker code diffractive optical element.

    PubMed

    Lapchuk, Anatoliy; Kryuchyn, Andriy; Petrov, Vyacheslav; Klymenko, Volodymyr

    2013-02-01

    An effective method of speckle suppression using one 2D diffractive optical element (DOE) moving with constant velocity based on the periodic Barker code sequence is developed. We prove that this method has the same optical parameters as the method based on two 1D Barker code DOEs stretched and moving in orthogonal directions. We also show that DOE movement in a special direction allows the full numerical aperture of the objective lens to be used for speckle averaging by angle diversity. It is found that the 2D DOE based on a Barker code of length of 13 allows the speckle contrast to be decreased below the sensitivity of the human eye with optical losses of less than 10%.
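
    One plausible way to form a 2D binary phase pattern from the length-13 Barker code is the outer product of the sequence with itself, mapping -1 entries to a pi phase step; the sketch below does exactly that and tiles the cell periodically. The actual DOE cell geometry, feature size, and motion direction used in the paper are not reproduced here.

```python
import numpy as np

BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

pattern_2d = np.outer(BARKER_13, BARKER_13)        # 13 x 13 two-level pattern
phase_cell = np.where(pattern_2d < 0, np.pi, 0.0)  # -1 -> pi phase step, +1 -> no shift

doe_phase = np.tile(phase_cell, (8, 8))            # repeat the cell over the DOE aperture
print(doe_phase.shape)                             # (104, 104)
```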

  16. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.

  17. Optimization of a DPP-BOTDA sensor with 25 cm spatial resolution over 60 km standard single-mode fiber using Simplex codes and optical pre-amplification.

    PubMed

    Soto, Marcelo A; Taki, Mohammad; Bolognini, Gabriele; Di Pasquale, Fabrizio

    2012-03-26

    Sub-meter distributed optical fiber sensing based on Brillouin optical time-domain analysis with differential pulse-width pairs (DPP-BOTDA) is combined with the use of optical pre-amplification and pulse coding. In order to provide significant measurement SNR enhancement and to avoid distortions in the Brillouin gain spectrum due to acoustic-wave pre-excitation, the pulse width and duty cycle of Simplex coding based on return-to-zero pulses are optimized through simulations. In addition, the use of linear optical pre-amplification increases the receiver sensitivity and the overall dynamic range of DPP-BOTDA measurements. Experimental results demonstrate for first time a spatial resolution of ~25 cm over a 60 km standard single-mode fiber (equivalent to ~240 k discrete sensing points) with temperature resolution of 1.2°C and strain resolution of 24 με.
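
    For readers unfamiliar with Simplex coding, the standard S-matrix construction deletes the first row and column of a Sylvester Hadamard matrix and maps +1 to 0 and -1 to 1; the sketch below builds such a matrix. The return-to-zero pulse width and duty-cycle optimization discussed in the paper are not reproduced; the last line merely illustrates that the S-matrix is invertible, which is what conventional Simplex decoding relies on.

```python
import numpy as np
from scipy.linalg import hadamard

def simplex_matrix(order):
    """(order-1) x (order-1) Simplex (S) matrix; order must be a power of two."""
    H = hadamard(order)
    return ((1 - H[1:, 1:]) // 2).astype(int)   # drop first row/column, map +1 -> 0, -1 -> 1

S = simplex_matrix(8)                           # 7 codewords, 7 pulse slots each
print(S)
S_inv = np.linalg.inv(S.astype(float))          # used to decode the coded acquisitions
```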

  18. Fragility in dense suspensions

    NASA Astrophysics Data System (ADS)

    Mari, Romain; Cates, Mike

    Dense suspensions can jam under shear when the volume fraction of solid material is large enough. In this work we investigate the mechanical properties of shear jammed suspensions with numerical simulations. In particular, we address the issue of the fragility of these systems, i.e., the type of mechanical response (elastic or plastic) they show when subject to a mechanical load differing from the one applied during their preparation history.

  19. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring capabilities of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.
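
    The throughput benefit of network coding comes from combining packets so that a single transmission serves several receivers. The toy below shows the classic XOR (butterfly-style) example only, not the paper's WDM-PON scheme; the packet contents are hypothetical and assumed to be of equal length.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"ONU1-data", b"ONU2-info"   # two packets of equal length
coded = xor(p1, p2)                   # one coded transmission carries both

assert xor(coded, p1) == p2           # a node already holding p1 recovers p2
assert xor(coded, p2) == p1           # a node already holding p2 recovers p1
```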

  20. Are energy dense diets also nutrient dense?

    PubMed

    Nicklas, Theresa A; O'Neil, Carol E; Mendoza, Jason; Liu, Yan; Zakeri, Issa F; Berenson, Gerald S

    2008-10-01

    Some beverages are nutrient dense, but they are often excluded from nutrient density calculations. The purpose of this study was to assess whether the energy-nutrient association changed when beverages were included in these calculations. Applying a cross-sectional design, a 24-hour dietary recall was collected on each participant. Subjects/Setting: 440 young adults (ages 19-28 years) in Bogalusa, Louisiana participated in this study. Mean nutrient intakes and food group consumption were examined across the energy density (ED) tertiles using two calculation methods: one with food and all beverages (excluding water) (ED1) and one including food and only energy containing beverages (ED2). Regression models were used and multiple comparisons were performed using the Tukey-Kramer procedure. A p-value < 0.05 was considered to be significant. With increasing ED, there was a significant increase in the consumption of total meats (ED1 p < 0.05; ED2 p < 0.01). In contrast, there was a significant decrease in consumption of fruits/juices (ED1 p < 0.01; ED2 p < 0.0001), vegetables (ED1 p < 0.01; ED2 p < 0.05), beverages (both p < 0.0001) and total sweets with increasing ED (both p < 0.0001). There was a significantly higher mean intake of total protein (grams) (ED2 p < 0.0001), amino acids (ED1 histidine/leucine p < 0.05; ED2 p < 0.0001), and total fat (grams) (ED1 p < 0.0001; ED2 p < 0.0001) with higher ED compared to lower ED. The percent energy from protein (ED1 p < 0.05; ED2 p < 0.0001), total fat (both p < 0.001) and saturated fatty acids (both p < 0.0001) significantly increased and the percent energy from carbohydrate (both p < 0.0001) and sucrose (both p < 0.0001) significantly decreased with increasing ED. This study suggests that ED may influence the nutrient density (ND) of the diet depending on whether energy containing beverages are included or excluded in the analysis.
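
    A worked toy calculation (hypothetical one-day intake, not study data) shows how the two energy-density definitions can diverge once beverages are included or excluded:

      # Hypothetical items as (name, grams, kcal); energy density (ED) is kcal per gram.
      foods = [("pizza slice", 285, 540), ("side salad", 150, 35), ("apple", 180, 95)]
      beverages = [("diet soda", 355, 0), ("milk", 244, 150), ("water", 500, 0)]

      def energy_density(items):
          grams = sum(g for _, g, _ in items)
          kcal = sum(k for _, _, k in items)
          return kcal / grams

      ed1 = energy_density(foods + [b for b in beverages if b[0] != "water"])  # food + all beverages except water
      ed2 = energy_density(foods + [b for b in beverages if b[2] > 0])         # food + energy-containing beverages only
      print(round(ed1, 2), round(ed2, 2))   # ~0.68 vs ~0.95 kcal/g for this made-up day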

  1. Are Energy Dense Diets Also Nutrient Dense?

    PubMed Central

    Nicklas, Theresa A.; O’Neil, Carol E.; Mendoza, Jason; Liu, Yan; Zakeri, Issa F.; Berenson, Gerald S.

    2009-01-01

    Objective Some beverages are nutrient dense, but they are often excluded from nutrient density calculations. The purpose of this study was to assess whether the energy-nutrient association changed when beverages were included in these calculations. Design Applying a cross-sectional design, a 24-hour dietary recall was collected on each participant. Subjects/Setting 440 young adults (ages 19–28 years) in Bogalusa, Louisiana participated in this study. Statistical Analysis Mean nutrient intakes and food group consumption were examined across the energy density (ED) tertiles using two calculation methods: one with food and all beverages (excluding water) (ED1) and one including food and only energy containing beverages (ED2). Regression models were used and multiple comparisons were performed using the Tukey-Kramer procedure. A p-value < 0.05 was considered to be significant. Results With increasing ED, there was a significant increase in the consumption of total meats (ED1 p < 0.05; ED2 p < 0.01). In contrast, there was a significant decrease in consumption of fruits/juices (ED1 p < 0.01; ED2 p < 0.0001), vegetables (ED1 p < 0.01; ED2 p < 0.05), beverages (both p < 0.0001) and total sweets with increasing ED (both p < 0.0001). There was a significantly higher mean intake of total protein (grams) (ED2 p < 0.0001), amino acids (ED1 histidine/leucine p < 0.05; ED2 p < 0.0001), and total fat (grams) (ED1 p < 0.0001; ED2 p < 0.0001) with higher ED compared to lower ED. The percent energy from protein (ED1 p < 0.05; ED2 p < 0.0001), total fat (both p < 0.001) and saturated fatty acids (both p < 0.0001) significantly increased and the percent energy from carbohydrate (both p < 0.0001) and sucrose (both p < 0.0001) significantly decreased with increasing ED. Conclusion This study suggests that ED may influence the ND of the diet depending on whether energy containing beverages are included or excluded in the analysis. PMID:18845705

  2. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band.

    PubMed

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta; Yu, Xianbin; Ukhanova, Anna; Llorente, Roberto; Monroy, Idelfonso Tafur; Forchhammer, Søren

    2011-12-12

    The paper addresses the problem of distributing high-definition video over fiber-wireless networks. A physical layer architecture with a low-complexity envelope detection solution is investigated. We present both experimental studies and simulations of high-quality, high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding, we satisfy low-complexity and low-delay constraints while preserving superb video quality over a significantly extended wireless distance.

  3. Dense matter at RAON: Challenges and possibilities

    NASA Astrophysics Data System (ADS)

    Lee, Yujeong; Lee, Chang-Hwan; Gaitanos, T.; Kim, Youngman

    2016-11-01

    Dense nuclear matter is ubiquitous in modern nuclear physics because it is related to many interesting microscopic and macroscopic phenomena such as heavy-ion collisions, nuclear structure, and neutron stars. The on-going rare isotope science project in Korea will build up a rare isotope accelerator complex called RAON. One of the main goals of RAON is to investigate rare isotope physics including dense nuclear matter. Using the relativistic Boltzmann-Uehling-Uhlenbeck (RBUU) transport code, we estimate the properties of nuclear matter that can be created from low-energy heavy-ion collisions at RAON. We give predictions for the maximum baryon density, the isospin asymmetry and the temperature of nuclear matter that would be formed during 197Au+197Au and 132Sn+64Ni reactions. With a large isospin asymmetry, various theoretical studies indicate that the critical densities or temperatures of phase transitions to exotic states decrease. Because a large isospin asymmetry is expected in the dense matter created at RAON, we discuss possibilities of observing exotic states of dense nuclear matter at RAON for large isospin asymmetry.

  4. Dense cold baryonic matter

    NASA Astrophysics Data System (ADS)

    Stavinskiy, A. V.

    2017-09-01

    A possibility of studying cold nuclear matter on the Nuclotron-NICA facility at baryonic densities characteristic of and higher than at the center of a neutron star is considered based on the data from cumulative processes. A special rare-event kinematic trigger for collisions of relativistic ions is proposed for effective selection of events accompanied by production of dense baryonic systems. Possible manifestations of new matter states under these unusual conditions and an experimental program for their study are discussed. Various experimental setups are proposed for these studies, and a possibility of using experimental setups at the Nuclotron-NICA facility for this purpose is considered.

  5. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  6. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  7. A user's manual for DELSOL3: A computer code for calculating the optical performance and optimal system design for solar thermal central receiver plants

    SciTech Connect

    Kistler, B.L.

    1986-11-01

    DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.

  8. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain-decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input over the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Portable, Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
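
    A minimal sketch of the decomposition pattern described above, written with mpi4py rather than CTF's Fortran: one MPI rank owns one fuel assembly and exchanges boundary (halo) data with neighbouring assemblies on each outer iteration. The one-dimensional neighbour layout and field names are hypothetical placeholders, not CTF data structures.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()                    # one rank <-> one assembly subdomain

      local_field = np.full(100, float(rank))   # stand-in for this assembly's flow solution
      neighbours = [r for r in (rank - 1, rank + 1) if 0 <= r < comm.Get_size()]

      for step in range(3):                     # outer iterations of the coupled solve
          for nbr in neighbours:                # halo exchange at assembly boundaries
              recv = np.empty_like(local_field)
              comm.Sendrecv(local_field, dest=nbr, recvbuf=recv, source=nbr)
              local_field = 0.5 * (local_field + recv)   # placeholder boundary update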

  9. Dense Axion Stars

    NASA Astrophysics Data System (ADS)

    Mohapatra, Abhishek; Braaten, Eric; Zhang, Hong

    2016-03-01

    If the dark matter consists of axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound Bose-Einstein condensates of axions. In the previously known axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. If the axion mass energy is mc2 = 10-4 eV, these dilute axion stars have a maximum mass of about 10-14M⊙. We point out that there are also dense axion stars in which gravity is balanced by the mean-field pressure of the axion condensate. We study axion stars using the leading term in a systematically improvable approximation to the effective potential of the nonrelativistic effective field theory for axions. Using the Thomas-Fermi approximation in which the kinetic pressure is neglected, we find a sequence of new branches of axion stars in which gravity is balanced by the mean-field interaction energy of the axion condensate. If mc2 = 10-4 eV, the first branch of these dense axion stars has mass ranging from about 10-11M⊙ to about M⊙.
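
    Schematically, the Thomas-Fermi balance described above (kinetic pressure neglected, mean-field pressure against self-gravity) takes the standard hydrostatic form

      \nabla P_{\mathrm{mf}}(n) = -\, m\, n\, \nabla\Phi, \qquad \nabla^{2}\Phi = 4\pi G\, m\, n,

    where n is the axion number density, m the axion mass, P_mf the pressure derived from the mean-field interaction energy, and \Phi the Newtonian gravitational potential; the effective potential actually used in the paper is more elaborate than this schematic form.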

  10. Dense Axion Stars.

    PubMed

    Braaten, Eric; Mohapatra, Abhishek; Zhang, Hong

    2016-09-16

    If the dark matter particles are axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound systems of axions. In the previously known solutions for axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. The mass of these dilute axion stars cannot exceed a critical mass, which is about 10^{-14}M_{⊙} if the axion mass is 10^{-4}  eV. We study axion stars using a simple approximation to the effective potential of the nonrelativistic effective field theory for axions. We find a new branch of dense axion stars in which gravity is balanced by the mean-field pressure of the axion Bose-Einstein condensate. The mass on this branch ranges from about 10^{-20}M_{⊙} to about M_{⊙}. If a dilute axion star with the critical mass accretes additional axions and collapses, it could produce a bosenova, leaving a dense axion star as the remnant.

  11. Dense suspension splash

    NASA Astrophysics Data System (ADS)

    Dodge, Kevin M.; Peters, Ivo R.; Ellowitz, Jake; Schaarsberg, Martin H. Klein; Jaeger, Heinrich M.; Zhang, Wendy W.

    2014-11-01

    Impact of a dense suspension drop onto a solid surface at speeds of several meters-per-second splashes by ejecting individual liquid-coated particles. Suppression or reduction of this splash is important for thermal spray coating and additive manufacturing. Accomplishing this aim requires distinguishing whether the splash is generated by individual scattering events or by collective motion reminiscent of liquid flow. Since particle inertia dominates over surface tension and viscous drag in a strong splash, we model suspension splash using a discrete-particle simulation in which the densely packed macroscopic particles experience inelastic collisions but zero friction or cohesion. Numerical results based on this highly simplified model are qualitatively consistent with observations. They also show that approximately 70% of the splash is generated by collective motion. Here an initially downward-moving particle is ejected into the splash because it experiences a succession of low-momentum-change collisions whose effects do not cancel but instead accumulate. The remainder of the splash is generated by scattering events in which a small number of high-momentum-change collisions cause a particle to be ejected upwards. Current Address: Physics of Fluids Group, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands.

  12. Dense Suspension Splash

    NASA Astrophysics Data System (ADS)

    Zhang, Wendy; Dodge, Kevin M.; Peters, Ivo R.; Ellowitz, Jake; Klein Schaarsberg, Martin H.; Jaeger, Heinrich M.

    2014-03-01

    Upon impact onto a solid surface at several meters-per-second, a dense suspension plug splashes by ejecting liquid-coated particles. We study the mechanism for splash formation using experiments and a numerical model. In the model, the dense suspension is idealized as a collection of cohesionless, rigid grains with finite surface roughness. The grains also experience lubrication drag as they approach, collide inelastically and rebound away from each other. Simulations using this model reproduce the measured momentum distribution of ejected particles. They also provide direct evidence supporting the conclusion from earlier experiments that inelastic collisions, rather than viscous drag, dominate when the suspension contains macroscopic particles immersed in a low-viscosity solvent such as water. Finally, the simulations reveal two distinct routes for splash formation: a particle can be ejected by a single high momentum-change collision. More surprisingly, a succession of small momentum-change collisions can accumulate to eject a particle outwards. Supported by NSF through its MRSEC program (DMR-0820054) and fluid dynamics program (CBET-1336489).

  13. Dense Axion Stars

    NASA Astrophysics Data System (ADS)

    Braaten, Eric; Mohapatra, Abhishek; Zhang, Hong

    2016-09-01

    If the dark matter particles are axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound systems of axions. In the previously known solutions for axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. The mass of these dilute axion stars cannot exceed a critical mass, which is about 10-14M⊙ if the axion mass is 10-4 eV . We study axion stars using a simple approximation to the effective potential of the nonrelativistic effective field theory for axions. We find a new branch of dense axion stars in which gravity is balanced by the mean-field pressure of the axion Bose-Einstein condensate. The mass on this branch ranges from about 10-20M⊙ to about M⊙ . If a dilute axion star with the critical mass accretes additional axions and collapses, it could produce a bosenova, leaving a dense axion star as the remnant.

  14. Warm dense crystallography

    NASA Astrophysics Data System (ADS)

    Valenza, Ryan A.; Seidler, Gerald T.

    2016-03-01

    The intense femtosecond-scale pulses from x-ray free electron lasers (XFELs) are able to create and interrogate interesting states of matter characterized by long-lived nonequilibrium semicore or core electron occupancies or by the heating of dense phases via the relaxation cascade initiated by the photoelectric effect. We address here the latter case of "warm dense matter" (WDM) and investigate the observable consequences of x-ray heating of the electronic degrees of freedom in crystalline systems. We report temperature-dependent density functional theory calculations for the x-ray diffraction from crystalline LiF, graphite, diamond, and Be. We find testable, strong signatures of condensed-phase effects that emphasize the importance of wide-angle scattering to study nonequilibrium states. These results also suggest that the reorganization of the valence electron density at eV-scale temperatures presents a confounding factor to achieving atomic resolution in macromolecular serial femtosecond crystallography (SFX) studies at XFELs, as performed under the "diffract before destroy" paradigm.

  15. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    NASA Astrophysics Data System (ADS)

    Ohana, N.; Jocksch, A.; Lanti, E.; Tran, T. M.; Brunner, S.; Gheller, C.; Hariri, F.; Villard, L.

    2016-11-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices.
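
    One of the optimizations mentioned above, sorting particles to improve data locality, can be sketched in a few lines (a generic illustration, not the PIC-engine's implementation): particles are reordered by the index of the grid cell they currently occupy, so that charge-deposition and field-gather loops touch grid memory contiguously.

      import numpy as np

      ncell = 64                                        # 1-D grid size (example value)
      positions = np.random.default_rng(0).uniform(0.0, 1.0, 100_000)   # particle coordinates in [0, 1)

      cell_index = np.minimum((positions * ncell).astype(int), ncell - 1)
      order = np.argsort(cell_index, kind="stable")     # bucket particles by cell
      positions_sorted = positions[order]               # apply the same permutation to all particle arrays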

  16. Theory and Simulation of Warm Dense Matter Targets

    SciTech Connect

    Barnard, J J; Armijo, J; More, R M; Friedman, A; Kaganovich, I; Logan, B G; Marinak, M M; Penn, G E; Sefkow, A B; Santhanam, P; Wurtele, J S

    2006-07-13

    We present simulations and analysis of the heating of warm dense matter foils by ion beams with ion energy less than one MeV per nucleon to target temperatures of order one eV. Simulations were carried out using the multi-physics radiation hydrodynamics code HYDRA and comparisons are made with analysis and the code DPC. We simulate possible targets for a proposed experiment at LBNL (the so-called Neutralized Drift Compression Experiment, NDCXII) for studies of warm dense matter. We compare the dynamics of ideally heated targets, under several assumed equations of state, exploring dynamics in the two-phase (fluid-vapor) regime.

  17. BUMPERII - DESIGN ANALYSIS CODE FOR OPTIMIZING SPACECRAFT SHIELDING AND WALL CONFIGURATION FOR ORBITAL DEBRIS AND METEOROID IMPACTS

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1994-01-01

    BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element of each threat. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability
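
    A minimal sketch of the Poisson-model bookkeeping described above (an illustration, not the BUMPERII code): each flat-plate element contributes an expected number of penetrating impacts equal to its penetrating-particle flux times its exposed area times the exposure time, and the probability of no penetration is the exponential of minus their sum. The fluxes and areas below are hypothetical placeholders.

      import math

      exposure_time_yr = 10.0
      elements = [
          # (penetrating-debris flux [impacts per m^2 per yr], exposed area [m^2])
          (2.0e-6, 15.0),
          (5.0e-7, 40.0),
          (1.2e-6, 8.0),
      ]

      expected_hits = sum(flux * area * exposure_time_yr for flux, area in elements)
      pnp = math.exp(-expected_hits)            # Poisson probability of zero penetrations
      print(f"PNP over {exposure_time_yr:.0f} years: {pnp:.6f}")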

  19. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  20. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2014-05-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78, 032515 (2008)].

  1. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals.

    PubMed

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represent a fundamental means of communication in the nervous system and are a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.

  2. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    NASA Astrophysics Data System (ADS)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represent a fundamental means of communication in the nervous system and are a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.

  3. Geometrical Optics of Dense Aerosols

    SciTech Connect

    Hay, Michael J.; Valeo, Ernest J.; Fisch, Nathaniel J.

    2013-04-24

    Assembling a free-standing, sharp-edged slab of homogeneous material that is much denser than gas, but much more rarefied than a solid, is an outstanding technological challenge. The solution may lie in focusing a dense aerosol to assume this geometry. However, whereas the geometrical optics of dilute aerosols is a well-developed field, the dense aerosol limit is mostly unexplored. Yet controlling the geometrical optics of dense aerosols is necessary in preparing such a material slab. Focusing dense aerosols is shown here to be possible, but the finite particle density reduces the effective Stokes number of the flow, a critical result for controlled focusing.
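
    For reference, the dilute-limit Stokes number referred to above is commonly defined as

      St = \frac{\tau_p U}{L}, \qquad \tau_p = \frac{\rho_p d_p^{2}}{18\, \mu_g},

    where \rho_p and d_p are the particle density and diameter, \mu_g the gas viscosity, and U and L characteristic velocity and length scales of the flow; the result above is that finite particle loading lowers the effective value of this parameter.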

  4. Ariel's Densely Pitted Surface

    NASA Technical Reports Server (NTRS)

    1986-01-01

    This mosaic of the four highest-resolution images of Ariel represents the most detailed Voyager 2 picture of this satellite of Uranus. The images were taken through the clear filter of Voyager's narrow-angle camera on Jan. 24, 1986, at a distance of about 130,000 kilometers (80,000 miles). Ariel is about 1,200 km (750 mi) in diameter; the resolution here is 2.4 km (1.5 mi). Much of Ariel's surface is densely pitted with craters 5 to 10 km (3 to 6 mi) across. These craters are close to the threshold of detection in this picture. Numerous valleys and fault scarps crisscross the highly pitted terrain. Voyager scientists believe the valleys have formed over down-dropped fault blocks (graben); apparently, extensive faulting has occurred as a result of expansion and stretching of Ariel's crust. The largest fault valleys, near the terminator at right, as well as a smooth region near the center of this image, have been partly filled with deposits that are younger and less heavily cratered than the pitted terrain. Narrow, somewhat sinuous scarps and valleys have been formed, in turn, in these young deposits. It is not yet clear whether these sinuous features have been formed by faulting or by the flow of fluids.

    JPL manages the Voyager project for NASA's Office of Space Science.

  5. Mercury's Densely Cratered Surface

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Mariner 10 took this picture (FDS 27465) of the densely cratered surface of Mercury when the spacecraft was 18,200 kilometers (8085 miles) from the planet on March 29. The dark line across top of picture is a 'dropout' of a few TV lines of data. At lower left, a portion of a 61 kilometer (38 mile) crater shows a flow front extending across the crater floor and filling more than half of the crater. The smaller, fresh crater at center is about 25 kilometers (15 miles) in diameter. Craters as small as one kilometer (about one-half mile) across are visible in the picture.

    The Mariner 10 mission, managed by the Jet Propulsion Laboratory for NASA's Office of Space Science, explored Venus in February 1974 on the way to three encounters with Mercury-in March and September 1974 and in March 1975. The spacecraft took more than 7,000 photos of Mercury, Venus, the Earth and the Moon.

    Image Credit: NASA/JPL/Northwestern University

  6. Optimization of Neutron Spectrum in Northwest Beam Tube of Tehran Research Reactor for BNCT, by MCNP Code

    SciTech Connect

    Zamani, M.; Kasesaz, Y.; Khalafi, H.; Shayesteh, M.

    2015-07-01

    In order to obtain a neutron spectrum with the proper characteristics for BNCT, it is necessary to design a Beam Shaping Assembly (BSA), consisting of a moderator, collimator, reflector, gamma filter and thermal neutron filter, in front of the initial radiation beam from the source. According to the results of MCNP4C simulations, the Northwest beam tube has the most optimized neutron flux among the three north beam tubes of the Tehran Research Reactor (TRR), so it was chosen for this purpose. Simulation of the BSA was carried out in the four above-mentioned phases. In each stage, the ten best configurations of materials with different lengths and widths were selected as candidates for the next stage. The final BSA configuration consists of: 78 centimeters of air as an empty space, 40 centimeters of iron plus 52 centimeters of heavy water as moderator, 30 centimeters of water or 90 centimeters of aluminum oxide as a reflector, 1 millimeter of lithium (Li) as a thermal neutron filter and finally 3 millimeters of bismuth (Bi) as a gamma radiation filter. The calculations show that if this BSA configuration is used for the TRR Northwest beam tube, the best neutron flux and spectrum will be achieved for BNCT. (authors)

  7. Strategies for identifying statistically significant dense regions in microarray data.

    PubMed

    Yip, Andy M; Ng, Michael K; Wu, Edmond H; Chan, Tony F

    2007-01-01

    We propose and study the notion of dense regions for the analysis of categorized gene expression data and present some searching algorithms for discovering them. The algorithms can be applied to any categorical data matrices derived from gene expression level matrices. We demonstrate that dense regions are simple but useful and statistically significant patterns that can be used to 1) identify genes and/or samples of interest and 2) eliminate genes and/or samples corresponding to outliers, noise, or abnormalities. Some theoretical studies on the properties of the dense regions are presented which allow us to characterize dense regions into several classes and to derive tailor-made algorithms for different classes of regions. Moreover, an empirical simulation study on the distribution of the size of dense regions is carried out which is then used to assess the significance of dense regions and to derive effective pruning methods to speed up the searching algorithms. Real microarray data sets are employed to test our methods. Comparisons with six other well-known clustering algorithms using synthetic and real data are also conducted which confirm the superiority of our methods in discovering dense regions. The DRIFT code and a tutorial are available as supplemental material, which can be found on the Computer Society Digital Library at http://computer.org/tcbb/archives.htm.

  8. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for a similar concatenated scheme that uses a convolutional code. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
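
    A worked example of the bandwidth comparison made above (the specific code parameters are illustrative, not necessarily those of the report): a rate-2/3 TCM scheme over 8-PSK carries 2 information bits per symbol, the same as uncoded QPSK, so the TCM stage itself adds no bandwidth expansion, and an outer RS(255,223) code alone sets the expansion at 255/223 ≈ 1.14, about 14%, within the quoted 10-50% range. Replacing the TCM stage with a rate-1/2 convolutional inner code gives 2 × 255/223 ≈ 2.29, about 129% expansion, within the quoted 70-150% range.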

  9. Conductive dense hydrogen

    NASA Astrophysics Data System (ADS)

    Eremets, M.; Troyan, I.

    2012-12-01

    Hydrogen at ambient pressures and low temperatures forms a molecular crystal which is expected to display metallic properties under megabar pressures. This metal is predicted to be superconducting with a very high critical temperature Tc of 200-400 K. The superconductor may potentially be recovered metastably at ambient pressures, and it may acquire a new quantum state as a metallic superfluid and a superconducting superfluid. Recent experiments performed at low temperatures T < 100 K showed that at record pressures of 300 GPa, hydrogen remains in the molecular state and is an insulator with a band gap of approximately 2 eV. Given our current experimental and theoretical understanding, hydrogen is expected to become metallic at pressures of 400-500 GPa, beyond the current limits of static pressures achievable using diamond anvil cells. We found that at room temperature and pressure > 220 GPa, new Raman modes arose, providing evidence for the transformation to a new opaque and electrically conductive phase IV. Above 260 GPa, in the next phase V, hydrogen reflected light well. Its resistance was nearly temperature-independent over a wide temperature range, down to 30 K, indicating that the hydrogen was metallic. Releasing the pressure induced the metallic phase to transform directly into molecular hydrogen with significant hysteresis at 200 GPa and 295 K. These data were published in our paper: M. I. Eremets and I. A. Troyan "Conductive dense hydrogen." Nature Materials 10: 927-931. We will also present new results on hydrogen: a phase diagram with phases IV and V determined in the P-T domain up to 300 GPa and 350 K. We will also discuss possible structures of phase IV based on our Raman and infrared measurements up to 300 GPa.

  10. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  11. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  12. Dense Visual SLAM with Probabilistic Surfel Map.

    PubMed

    Yan, Zhixin; Ye, Mao; Ren, Liu

    2017-11-01

    Visual SLAM is one of the key technologies to align the virtual and real world together in Augmented Reality applications. RGBD dense Visual SLAM approaches have shown their advantages in robustness and accuracy in recent years. However, there are still several challenges such as the inconsistencies in RGBD measurements across multiple frames that could jeopardize the accuracy of both camera trajectory and scene reconstruction. In this paper, we propose a novel map representation called Probabilistic Surfel Map (PSM) for dense visual SLAM. The main idea is to maintain a globally consistent map with both photometric and geometric uncertainties encoded in order to address the inconsistency issue. The key of our PSM is proper modeling and updating of sensor measurement uncertainties, as well as the strategies to apply them for improving both the front-end pose estimation and the back-end optimization. Experimental results on publicly available datasets demonstrate major improvements with our approach over the state-of-the-art methods. Specifically, comparing with σ-DVO, we achieve a 40% reduction in absolute trajectory error and an 18% reduction in relative pose error in visual odometry, as well as an 8.5% reduction in absolute trajectory error in complete SLAM. Moreover, our PSM enables generation of a high quality dense point cloud with comparable accuracy as the state-of-the-art approach.
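
    A toy illustration of the kind of per-surfel, uncertainty-weighted update a probabilistic map maintains (standard product-of-Gaussians fusion; the PSM update rules in the paper are more involved):

      def fuse(mean, var, meas_mean, meas_var):
          """Fuse a new depth measurement (Gaussian) into a surfel's stored mean/variance."""
          k = var / (var + meas_var)            # trust the measurement more if the surfel is uncertain
          new_mean = mean + k * (meas_mean - mean)
          new_var = (1.0 - k) * var
          return new_mean, new_var

      surfel_depth, surfel_var = 1.50, 0.04     # metres, metres^2 (hypothetical values)
      print(fuse(surfel_depth, surfel_var, 1.46, 0.01))   # -> (1.468, 0.008)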

  13. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    SciTech Connect

    Seshadhri, Comandur; Pinar, Ali; Sariyuce, Ahmet Erdem; Catalyurek, Umit

    2014-11-01

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
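
    As a reference point, a minimal sketch of the classic k-core peeling that nucleus decompositions generalize (an illustration of the special case, not the authors' nucleus algorithm):

      from collections import defaultdict

      def core_numbers(edges):
          """Peel vertices in order of current degree; each vertex's core number is its degree at removal."""
          adj = defaultdict(set)
          for u, v in edges:
              adj[u].add(v)
              adj[v].add(u)
          degree = {v: len(nbrs) for v, nbrs in adj.items()}
          core, remaining = {}, set(adj)
          while remaining:
              v = min(remaining, key=lambda x: degree[x])   # lowest-degree vertex is peeled next
              core[v] = degree[v]
              remaining.remove(v)
              for u in adj[v]:
                  if u in remaining and degree[u] > degree[v]:
                      degree[u] -= 1
          return core

      print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))  # the triangle is a 2-core, vertex 4 a 1-core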

  14. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  16. Topological subsystem codes

    SciTech Connect

    Bombin, H.

    2010-03-15

    We introduce a family of two-dimensional (2D) topological subsystem quantum error-correcting codes. The gauge group is generated by two-local Pauli operators, so that two-local measurements are enough to recover the error syndrome. We study the computational power of code deformation in these codes and show that boundaries cannot be introduced in the usual way. In addition, we give a general mapping connecting suitable classical statistical mechanical models to optimal error correction in subsystem stabilizer codes that suffer from depolarizing noise.

  17. Analysis of dense particulate flow dynamics using a Euler-Lagrange approach

    NASA Astrophysics Data System (ADS)

    Desjardins, Olivier; Pepiot, Perrine

    2009-11-01

    Thermochemical conversion of biomass to biofuels relies heavily on dense particulate flows to enhance heat and mass transfers. While CFD tools can provide very valuable insights on reactor design and optimization, accurate simulations of these flows remain extremely challenging due to the complex coupling between the gas and solid phases. In this work, Lagrangian particle tracking has been implemented in the arbitrarily high order parallel LES/DNS code NGA [Desjardins et al., JCP, 2008]. Collisions are handled using a soft-sphere model, while a combined least squares/mollification approach is adopted to accurately transfer data between the Lagrangian particles and the Eulerian gas phase mesh, regardless of the particle diameter to mesh size ratio. The energy conservation properties of the numerical scheme are assessed and a detailed statistical analysis of the dynamics of a periodic fluidized bed with a uniform velocity inlet is conducted.
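
    A sketch of a common linear spring-dashpot form of the soft-sphere normal contact force (one typical choice for this class of models; the exact force law and coefficients used in the work above may differ):

      import numpy as np

      def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1.0e4, eta_n=5.0):
          """Repulsive spring plus dissipative dashpot along the line of centres; returns the force on particle i."""
          d = x_j - x_i
          dist = np.linalg.norm(d)
          overlap = (r_i + r_j) - dist
          if overlap <= 0.0:
              return np.zeros(3)                   # particles are not in contact
          n = d / dist                             # unit normal from i towards j
          v_rel_n = np.dot(v_j - v_i, n)           # relative normal velocity (negative when approaching)
          f_mag = k_n * overlap - eta_n * v_rel_n  # spring repulsion + dashpot dissipation
          return -f_mag * n                        # push particle i away from j

      f_on_i = normal_contact_force(np.zeros(3), np.array([0.0, 0.0, 0.9e-3]),
                                    np.zeros(3), np.array([0.0, 0.0, -0.1]),
                                    5.0e-4, 5.0e-4)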

  18. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2012-07-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].

    New version program summary
    Program title: HFFER II
    Catalogue identifier: AECC_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 55 130
    No. of bytes in distributed program, including test data, etc.: 293 700
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: Cluster of 1-13 HP Compaq dc5750
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
    RAM: 1 GByte per node
    Classification: 2.1
    External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
    Catalogue identifier of previous version: AECC_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
    Does the new version supersede the previous version?: Yes
    Nature of problem: Quantitative modellings of features observed in the X-ray spectra of isolated magnetic neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases.
    Solution method: The

  19. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  20. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
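
    The basic two-dimensional block-cyclic ownership rule referred to above can be written in a few lines (illustrative only; HPL-style codes and the striding and rotation variants discussed in the paper layer further choices on top of this mapping):

      def owner(block_row, block_col, proc_rows, proc_cols):
          """Process coordinates that own matrix block (block_row, block_col) in a 2-D block-cyclic layout."""
          return (block_row % proc_rows, block_col % proc_cols)

      # A 6x6 grid of matrix blocks mapped onto a 2x3 process grid:
      for i in range(6):
          print([owner(i, j, 2, 3) for j in range(6)])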

  1. Chemical Laser Computer Code Survey,

    DTIC Science & Technology

    1980-12-01

    DOCUMENTATION: Resonator Geometry Synthesis Code Requirement (V. L. Gamiz); Incorporate General Resonator into Ray Trace Code (W. H. Southwell); Synthesis Code Development (L. R. Stidham). Categories covered include optics, kinetics, and gasdynamics, at levels from a simple Fabry-Perot to a simple saturated-gain model; further items include Optimization Algorithms and Equations (W. ...)

  2. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J. (Dept. of Computer Science; Oak Ridge National Lab., TN); van de Geijn, R. (Dept. of Computer Sciences); Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.

  3. A look at scalable dense linear algebra libraries

    SciTech Connect

    Dongarra, J.J.; van de Geijn, R.; Walker, D.W.

    1992-07-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.

  4. Parametric bleaching of dense plasmas

    NASA Astrophysics Data System (ADS)

    Gradov, O. M.; Ramazashvili, R. R.

    1981-11-01

    A mechanism is proposed for the nonlinear bleaching of a dense plasma slab. In this new mechanism, the electromagnetic wave incident on the plasma decays into plasma waves and then reappears as a result of the coalescence of the plasma waves at the second boundary of the slab.

  5. Ultra-dense Hot Low Z Line Transition Opacity Simulations

    NASA Astrophysics Data System (ADS)

    Sauvan, P.; Mínguez, E.; Gil, J. M.; Rodríguez, R.; Rubiano, J. G.; Martel, P.; Angelo, P.; Schott, R.; Philippe, F.; Leboucher-Dalimier, E.; Mancini, R.; Calisti, A.

    2002-12-01

    In this work, two atomic physics models (the IDEFIX code, based on the dicenter model, and the ANALOP code, based on parametric potentials) have been used to calculate the opacities of bound-bound transitions in hot, ultra-dense, low-Z plasmas. These simulations are connected with experiments carried out at LULI during the last two years, focused on bound-bound radiation. In this paper, H-like opacities for aluminum and fluorine plasmas have been simulated with both theoretical models over a wide range of densities and at temperatures above 200 eV.

  6. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    SciTech Connect

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-06-15

    Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence-pattern along the beam-axis the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
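
    As an illustration of the kind of calculation described above (not the authors' Matlab tool), the following Python sketch propagates a beam through a drift-quadrupole-drift-quadrupole-drift line using standard transfer matrices and numerically optimizes the two quadrupole strengths for a small, round spot at the iso-center. All geometry and beam parameters are assumed placeholder values, and effects such as momentum spread are ignored.

      import numpy as np
      from scipy.optimize import minimize

      def drift(L):
          return np.array([[1.0, L], [0.0, 1.0]])

      def quad(k, L, focusing):
          # Thick-lens quadrupole matrix; k > 0 is the normalized gradient [1/m^2].
          w = np.sqrt(k) * L
          if focusing:
              return np.array([[np.cos(w), np.sin(w) / np.sqrt(k)],
                               [-np.sqrt(k) * np.sin(w), np.cos(w)]])
          return np.array([[np.cosh(w), np.sinh(w) / np.sqrt(k)],
                           [np.sqrt(k) * np.sinh(w), np.cosh(w)]])

      # Illustrative geometry (assumed, in meters): drift, quad 1, drift, quad 2, drift to iso-center.
      L_Q, D1, D2, D3 = 0.3, 1.0, 0.5, 3.0

      def spot_size(k, sigma0):
          k1, k2 = max(abs(k[0]), 1e-6), max(abs(k[1]), 1e-6)
          # Quad 1 focuses in x / defocuses in y; quad 2 the opposite (a doublet).
          Mx = drift(D3) @ quad(k2, L_Q, False) @ drift(D2) @ quad(k1, L_Q, True) @ drift(D1)
          My = drift(D3) @ quad(k2, L_Q, True) @ drift(D2) @ quad(k1, L_Q, False) @ drift(D1)
          sx = Mx @ sigma0 @ Mx.T
          sy = My @ sigma0 @ My.T
          # Penalize large spots and x/y asymmetry to obtain a thin, circular spot.
          return sx[0, 0] + sy[0, 0] + (sx[0, 0] - sy[0, 0]) ** 2

      sigma0 = np.diag([1e-6, 1e-6])          # initial beam matrix (m^2, rad^2), assumed
      res = minimize(spot_size, x0=[2.0, 2.0], args=(sigma0,))
      print("optimized quad strengths [1/m^2]:", np.abs(res.x))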

  7. Scalable motion vector coding

    NASA Astrophysics Data System (ADS)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
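
    The following Python sketch illustrates the general idea of median-based motion vector prediction named above (the actual codec's details differ): the predictor for each block is the component-wise median of the causal left, top and top-right neighbouring motion vectors, and only the residual is passed on for coding. Block layout and neighbour choice are assumptions made for the example.

      import numpy as np

      def predict_mv(mv_field, row, col):
          """Component-wise median of causal neighbours (left, top, top-right)."""
          neighbours = []
          if col > 0:
              neighbours.append(mv_field[row, col - 1])
          if row > 0:
              neighbours.append(mv_field[row - 1, col])
          if row > 0 and col + 1 < mv_field.shape[1]:
              neighbours.append(mv_field[row - 1, col + 1])
          if not neighbours:
              return np.zeros(2, dtype=int)
          return np.median(np.stack(neighbours), axis=0).astype(int)

      def mv_residuals(mv_field):
          """Residuals that would be handed to the entropy coder (mv_field: rows x cols x 2)."""
          res = np.zeros_like(mv_field)
          for r in range(mv_field.shape[0]):
              for c in range(mv_field.shape[1]):
                  res[r, c] = mv_field[r, c] - predict_mv(mv_field, r, c)
          return res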

  8. Ethical coding.

    PubMed

    Resnik, Barry I

    2009-01-01

    It is ethical, legal, and proper for a dermatologist to maximize income through proper coding of patient encounters and procedures. The overzealous physician can misinterpret reimbursement requirements or receive bad advice from other physicians and cross the line from aggressive coding to coding fraud. Several of the more common problem areas are discussed.

  9. FALCON or how to compute measures time efficiently on dynamically evolving dense complex networks?

    PubMed

    Franke, R; Ivanova, G

    2014-02-01

    A large number of topics in biology, medicine, neuroscience, psychology and sociology can be described in general terms via complex networks in order to investigate fundamental questions of structure, connectivity, information exchange and causality. In particular, research on biological networks, such as functional spatiotemporal brain activations and the changes caused by neuropsychiatric pathologies, is promising. When analyzing such complex networks, the calculation of meaningful measures can be very time-consuming, depending on their size and structure. Even worse, in many labs only standard desktop computers are available to perform those calculations. Numerous investigations on complex networks concern huge but sparsely connected network structures, where most network nodes are connected to only a few others. Currently, there are several libraries available to tackle this kind of network. A problem arises when not only a few big and sparse networks have to be analyzed, but hundreds or thousands of smaller and conceivably dense networks (e.g. when measuring brain activation over time). Then every minute per network is crucial. For these cases there are several ways to use standard hardware more efficiently, and it is not sufficient simply to apply standard algorithms for dense graph characteristics. This article introduces the new library FALCON, developed especially for the exploration of dense complex networks. Currently, it offers 12 different measures (such as clustering coefficients), each for undirected-unweighted, undirected-weighted and directed-unweighted networks. It uses a multi-core approach in combination with comprehensive code and hardware optimizations. There is also an alternative massively parallel GPU implementation for the most time-consuming measures. Finally, a comparative benchmark is integrated to support the choice of the most suitable library for a particular network problem. Copyright © 2013 Elsevier Inc. All rights reserved.
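
    As a minimal sketch of one such measure (not the FALCON API), the Python snippet below computes local clustering coefficients for an undirected, unweighted network directly from the adjacency matrix; for dense networks, vectorized matrix products of this kind are usually faster than sparse edge-list algorithms. Sizes and densities in the example are assumed.

      import numpy as np

      def clustering_coefficients(A):
          """Local clustering coefficients of an undirected, unweighted graph.

          A : (n, n) 0/1 symmetric adjacency matrix with zero diagonal.
          """
          A = np.asarray(A, dtype=float)
          deg = A.sum(axis=1)                       # node degrees
          triangles = np.diag(A @ A @ A) / 2.0      # closed triangles through each node
          possible = deg * (deg - 1) / 2.0          # possible triangles
          with np.errstate(divide="ignore", invalid="ignore"):
              return np.where(possible > 0, triangles / possible, 0.0)

      # Example: a small dense random graph
      rng = np.random.default_rng(0)
      A = (rng.random((200, 200)) < 0.4).astype(int)
      A = np.triu(A, 1); A = A + A.T
      print(clustering_coefficients(A).mean())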

  10. Warm Dense Matter: An Overview

    SciTech Connect

    Kalantar, D H; Lee, R W; Molitoris, J D

    2004-04-21

    This document provides a summary of the "LLNL Workshop on Extreme States of Materials: Warm Dense Matter to NIF," which was held on 20, 21, and 22 February 2002 at the Wente Conference Center in Livermore, CA. The warm dense matter regime, the transitional phase space region between cold material and hot plasma, is presently poorly understood. The drive to understand the nature of matter in this regime is sparking scientific activity worldwide. In addition to pure scientific interest, finite temperature dense matter occurs in the regimes of interest to the SSMP (Stockpile Stewardship Materials Program). Thus, obtaining a better understanding of WDM is important for performing effective experiments at, e.g., NIF, a primary mission of LLNL. At this workshop we examined current experimental and theoretical work performed at, and in conjunction with, LLNL to focus future activities and define our role in this rapidly emerging research area. On the experimental front LLNL plays a leading role in three of the five relevant areas and has the opportunity to become a major player in the other two. Discussion at the workshop indicated that the path forward for the experimental efforts at LLNL was twofold: First, we are doing reasonable baseline work at SPLs, HE, and High Energy Lasers, with more effort encouraged. Second, we need to plan effectively for the next evolution in large scale facilities, both laser (NIF) and light/beam sources (LCLS/TESLA and GSI). Theoretically, LLNL has major research advantages in areas ranging from the thermochemical approach to warm dense matter equations of state to first-principles molecular dynamics simulations. However, it was clear that there is much work to be done theoretically to understand warm dense matter. Further, there is a need for close collaboration on the generation of verifiable experimental data that can provide benchmarks for both the experimental techniques and the theoretical capabilities. The conclusion of this

  11. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to: (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur with the conclusions so a plan to use the proposed uplink system can be embarked on; (4) identify the need for the development of appropriate technology and its infusion into the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  12. Transonic aerodynamics of dense gases. M.S. Thesis - Virginia Polytechnic Inst. and State Univ., Apr. 1990

    NASA Technical Reports Server (NTRS)

    Morren, Sybil Huang

    1991-01-01

    Transonic, two-dimensional, steady-state flow of dense gases over a NACA 0012 airfoil was predicted analytically. The computer code used to model the dense gas behavior was a modified version of Jameson's FLO52 airfoil code. The modifications to the code enabled modeling of the dense gas behavior near the saturated vapor curve and critical pressure region where the fundamental derivative, Gamma, is negative. This negative Gamma region is of interest because nonclassical gas behavior, such as the formation and propagation of expansion shocks and the disintegration of inadmissible compression shocks, may exist there. The results indicated that dense gases with undisturbed thermodynamic states in the negative Gamma region show a significant reduction in the extent of the transonic regime as compared to that predicted by perfect gas theory. The results support existing theories and predictions of nonclassical, dense gas behavior from previous investigations.

  13. Radiative properties of dense nanofluids.

    PubMed

    Wei, Wei; Fedorov, Andrei G; Luo, Zhongyang; Ni, Mingjiang

    2012-09-01

    The radiative properties of dense nanofluids are investigated. For nanofluids, scattering and absorbing of electromagnetic waves by nanoparticles, as well as light absorption by the matrix/fluid in which the nanoparticles are suspended, should be considered. We compare five models for predicting apparent radiative properties of nanoparticulate media and evaluate their applicability. Using spectral absorption and scattering coefficients predicted by different models, we compute the apparent transmittance of a nanofluid layer, including multiple reflecting interfaces bounding the layer, and compare the model predictions with experimental results from the literature. Finally, we propose a new method to calculate the spectral radiative properties of dense nanofluids that shows quantitatively good agreement with the experimental results.
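
    A hedged, simplified sketch of the last step described above is given below in Python: the apparent transmittance of a homogeneous nanofluid layer bounded by two partially reflecting interfaces, assuming incoherent multiple reflections. Here beta stands for the spectral extinction coefficient (absorption plus scattering) predicted by whichever property model is chosen; the numerical values are placeholders, not results from the paper.

      import numpy as np

      def layer_transmittance(beta, thickness, R=0.04):
          """Incoherent two-interface slab: T = (1-R)^2 e^{-beta L} / (1 - R^2 e^{-2 beta L})."""
          t = np.exp(-beta * thickness)
          return (1.0 - R) ** 2 * t / (1.0 - R ** 2 * t ** 2)

      # Example: extinction 500 1/m over a 1 mm layer with ~4% reflectance per interface.
      print(layer_transmittance(500.0, 1e-3))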

  14. Boundary Preserving Dense Local Regions.

    PubMed

    Kim, Jaechul; Grauman, Kristen

    2015-05-01

    We propose a dense local region detector to extract features suitable for image matching and object recognition tasks. Whereas traditional local interest operators rely on repeatable structures that often cross object boundaries (e.g., corners, scale-space blobs), our sampling strategy is driven by segmentation, and thus preserves object boundaries and shape. At the same time, whereas existing region-based representations are sensitive to segmentation parameters and object deformations, our novel approach to robustly sample dense sites and determine their connectivity offers better repeatability. In extensive experiments, we find that the proposed region detector provides significantly better repeatability and localization accuracy for object matching compared to an array of existing feature detectors. In addition, we show our regions lead to excellent results on two benchmark tasks that require good feature matching: weakly supervised foreground discovery and nearest neighbor-based object recognition.

  15. Coding Theory and Projective Spaces

    NASA Astrophysics Data System (ADS)

    Silberstein, Natalia

    2008-05-01

    The projective space of order n over a finite field F_q is the set of all subspaces of the vector space F_q^n. In this work, we consider error-correcting codes in the projective space, focusing mainly on constant dimension codes. We start with the different representations of subspaces in the projective space. These representations involve matrices in reduced row echelon form, associated binary vectors, and Ferrers diagrams. Based on these representations, we provide a new formula for the computation of the distance between any two subspaces in the projective space. We examine lifted maximum rank distance (MRD) codes, which are nearly optimal constant dimension codes. We prove that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design. The incidence matrix of the transversal design derived from a lifted MRD code can be viewed as a parity-check matrix of a linear code in the Hamming space. We derive the properties of these codes, which can also be viewed as LDPC codes. We present new bounds and constructions for constant dimension codes. First, we present a multilevel construction for constant dimension codes, which can be viewed as a generalization of the lifted MRD codes construction. This construction is based on a new type of rank-metric codes, called Ferrers diagram rank-metric codes. Then we derive upper bounds on the size of constant dimension codes which contain the lifted MRD code, and provide a construction for two families of codes that attain these upper bounds. We generalize the well-known concept of a punctured code for a code in the projective space to obtain large codes which are not constant dimension. We present efficient enumerative encoding and decoding techniques for the Grassmannian. Finally, we describe a search method for constant dimension lexicodes.
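
    As a hedged illustration of the subspace distance used for codes in the projective space, the Python sketch below evaluates d(U, V) = dim(U) + dim(V) - 2 dim(U ∩ V) = 2 dim(U + V) - dim(U) - dim(V) over F_2 with a small Gaussian-elimination rank routine; the example subspaces are made up, and the formula in the paper may be stated in a different but equivalent form.

      import numpy as np

      def rank_gf2(M):
          M = (np.array(M, dtype=np.uint8) % 2).copy()
          rank, rows, cols = 0, M.shape[0], M.shape[1]
          for c in range(cols):
              pivot = next((r for r in range(rank, rows) if M[r, c]), None)
              if pivot is None:
                  continue
              M[[rank, pivot]] = M[[pivot, rank]]
              for r in range(rows):
                  if r != rank and M[r, c]:
                      M[r] ^= M[rank]
              rank += 1
          return rank

      def subspace_distance(U, V):
          """U, V: generator matrices (rows span the subspaces) over F_2."""
          dU, dV = rank_gf2(U), rank_gf2(V)
          d_sum = rank_gf2(np.vstack([U, V]))      # dim(U + V)
          return 2 * d_sum - dU - dV

      U = [[1, 0, 0, 1], [0, 1, 0, 1]]
      V = [[1, 0, 0, 1], [0, 0, 1, 1]]
      print(subspace_distance(U, V))   # -> 2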

  16. An efficient fully atomistic potential model for dense fluid methane

    NASA Astrophysics Data System (ADS)

    Jiang, Chuntao; Ouyang, Jie; Zhuang, Xin; Wang, Lihua; Li, Wuming

    2016-08-01

    A fully atomistic, general-purpose potential model for dense fluid methane is presented. The new optimized potential for liquid simulation (OPLS) model is a rigid five-site model consisting of five fixed point charges and five Lennard-Jones centers. The parameters in the potential model are determined by fitting experimental data for dense fluid methane using molecular dynamics simulation. The radial distribution function and the diffusion coefficient are successfully calculated for dense fluid methane at various state points, and the simulated results are in good agreement with the experimental data available in the literature. Moreover, the distribution of the mean number of hydrogen bonds and the distribution of pair energy are analyzed, as obtained from the new model and five other reference potential models. Furthermore, the space-time correlation functions for dense fluid methane are also discussed. All the numerical results demonstrate that the new OPLS model is well suited to investigating dense fluid methane.
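
    The site-site form of such a rigid, fully atomistic OPLS-type potential can be sketched as below in Python: the interaction energy between two molecules is a sum of Lennard-Jones and Coulomb terms over all pairs of sites. The parameter values, combining rules and coordinates here are placeholders, not the parameters fitted in the paper.

      import numpy as np

      COULOMB_K = 138.935458  # kJ mol^-1 nm e^-2

      def pair_energy(sites_a, sites_b, params):
          """sites_*: (n_sites, 3) coordinates in nm; params: per-site (sigma, epsilon, charge)."""
          e = 0.0
          for ra, (sa, ea, qa) in zip(sites_a, params):
              for rb, (sb, eb, qb) in zip(sites_b, params):
                  r = np.linalg.norm(ra - rb)
                  sigma = 0.5 * (sa + sb)            # combining rule assumed for the sketch
                  eps = np.sqrt(ea * eb)
                  e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)   # Lennard-Jones term
                  e += COULOMB_K * qa * qb / r                            # Coulomb term
          return e

      # Toy usage with a single placeholder site per "molecule":
      p = [(0.35, 0.3, 0.0)]
      print(pair_energy(np.array([[0.0, 0.0, 0.0]]), np.array([[0.4, 0.0, 0.0]]), p))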

  17. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even when that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  18. Dense, viscous brine behavior in heterogeneous porous medium systems.

    PubMed

    Wright, D Johnson; Pedit, J A; Gasda, S E; Farthing, M W; Murphy, L L; Knight, S R; Brubaker, G R; Miller, C T

    2010-06-25

    The behavior of dense, viscous calcium bromide brine solutions used to remediate systems contaminated with dense nonaqueous phase liquids (DNAPLs) is considered in laboratory and field porous medium systems. The density and viscosity of brine solutions are experimentally investigated and functional forms fit over a wide range of mass fractions. A density of 1.7 times, and a corresponding viscosity of 6.3 times, that of water is obtained at a calcium bromide mass fraction of 0.53. A three-dimensional laboratory cell is used to investigate the establishment, persistence, and rate of removal of a stratified dense brine layer in a controlled system. Results from a field-scale experiment performed at the Dover National Test Site are used to investigate the ability to establish and maintain a dense brine layer as a component of a DNAPL recovery strategy, and to recover the brine at sufficiently high mass fractions to support the economical reuse of the brine. The results of both laboratory and field experiments show that a dense brine layer can be established, maintained, and recovered to a significant extent. Regions of unstable density profiles are shown to develop and persist in the field-scale experiment, which we attribute to regions of low hydraulic conductivity. The saturated-unsaturated, variable-density groundwater flow simulation code SUTRA is modified to describe the system of interest, and used to compare simulations to experimental observations and to investigate certain unobserved aspects of these complex systems. The model results show that the standard model formulation is not appropriate for capturing the behavior of sharp density gradients observed during the dense brine experiments. 2010 Elsevier B.V. All rights reserved.

  19. Dense, Viscous Brine Behavior in Heterogeneous Porous Medium Systems

    PubMed Central

    Wright, D. Johnson; Pedit, J.A.; Gasda, S.E.; Farthing, M.W.; Murphy, L.L.; Knight, S.R.; Brubaker, G.R.

    2010-01-01

    The behavior of dense, viscous calcium bromide brine solutions used to remediate systems contaminated with dense nonaqueous phase liquids (DNAPLs) is considered in laboratory and field porous medium systems. The density and viscosity of brine solutions are experimentally investigated and functional forms fit over a wide range of mass fractions. A density of 1.7 times, and a corresponding viscosity of 6.3 times, that of water is obtained at a calcium bromide mass fraction of 0.53. A three-dimensional laboratory cell is used to investigate the establishment, persistence, and rate of removal of a stratified dense brine layer in a controlled system. Results from a field-scale experiment performed at the Dover National Test Site are used to investigate the ability to establish and maintain a dense brine layer as a component of a DNAPL recovery strategy, and to recover the brine at sufficiently high mass fractions to support the economical reuse of the brine. The results of both laboratory and field experiments show that a dense brine layer can be established, maintained, and recovered to a significant extent. Regions of unstable density profiles are shown to develop and persist in the field-scale experiment, which we attribute to regions of low hydraulic conductivity. The saturated-unsaturated, variable-density ground-water flow simulation code SUTRA is modified to describe the system of interest, and used to compare simulations to experimental observations and to investigate certain unobserved aspects of these complex systems. The model results show that the standard model formulation is not appropriate for capturing the behavior of sharp density gradients observed during the dense brine experiments. PMID:20444520

  20. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  1. Constructing Dense Graphs with Unique Hamiltonian Cycles

    ERIC Educational Resources Information Center

    Lynch, Mark A. M.

    2012-01-01

    It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…

  2. N-Body Evolution of Dense Clusters of Compact Stars

    NASA Astrophysics Data System (ADS)

    Lee, Man Hoi

    1993-11-01

    The dynamical evolution of dense clusters of compact stars is studied using direct N-body simulations. The formation of binaries and their subsequent merging by gravitational radiation emission is important to the evolution of such clusters. Aarseth's NBODY5 N-body simulation code is modified to include the lowest order gravitational radiation force during two-body encounters and to handle the decay and merger of radiating binaries. It is used to study the evolution of small-N (= 1000) clusters with different initial velocity dispersions. The initial evolution is similar to that obtained by Quinlan & Shapiro (1989) using a multimass Fokker-Planck code and shows orderly formation of heavy objects. However, the late evolution differs qualitatively from previous results. In particular, we find runaway growth for the most massive object in the cluster: it acquires a mass much larger than that of the other objects and is detached from the smooth mass spectrum of the rest of the objects. We discuss why the Fokker-Planck equation with a mean-rate approach to the merger process cannot model runaway growth, and we present arguments to show that merger by gravitational radiation is expected to be unstable to runaway growth. The results suggest that a seed massive black hole can be formed by runaway growth in a dense cluster of compact stars. The possibility of runaway growth in dense clusters of normal stars is also discussed.

  3. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
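
    The layering idea can be illustrated with a heavily simplified, hypothetical Python sketch (not the actual HOP transform, which operates on a two-dimensional hexagonal lattice): seven orthogonal kernels are applied to non-overlapping seven-sample neighbourhoods, the first channel plays the role of the low-pass "blob" coefficients that feed the next, coarser layer, and the remaining six channels stay in the current layer. The signal, kernels and recursion depth are assumptions made for the example.

      import numpy as np

      def pyramid_code(signal, kernels, n_layers=3):
          """signal: 1-D array whose length is a power of 7; kernels: orthogonal 7x7 matrix."""
          layers = []
          current = np.asarray(signal, dtype=float)
          for _ in range(n_layers):
              blocks = current.reshape(-1, 7)            # non-overlapping 7-sample neighbourhoods
              coeffs = blocks @ kernels.T                # one coefficient per kernel per block
              layers.append(coeffs[:, 1:])               # six "oriented" channels stay in this layer
              current = coeffs[:, 0]                     # first channel acts as the low-pass ("blob") layer
              if current.size < 7:
                  break
          return layers, current

      # Example with a random orthogonal 7x7 kernel set and a length-343 signal.
      rng = np.random.default_rng(1)
      kernels, _ = np.linalg.qr(rng.standard_normal((7, 7)))
      layers, top = pyramid_code(rng.standard_normal(343), kernels)
      print([l.shape for l in layers], top.shape)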

  4. Embedded foveation image coding.

    PubMed

    Wang, Z; Bovik, A C

    2001-01-01

    The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.
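
    A minimal illustrative sketch of the foveation step (not the EFIC algorithm itself) is given below in Python: high-frequency detail is removed progressively with eccentricity from the fixation point by blending increasingly blurred copies of the image, before any embedded wavelet coding would be applied. The blur levels and eccentricity mapping are assumptions for the example.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def foveate(image, fx, fy, sigma_max=8.0):
          """Blend progressively blurred copies of `image` according to eccentricity.

          (fx, fy): fixation (foveation) point in pixel coordinates.
          """
          h, w = image.shape
          yy, xx = np.mgrid[0:h, 0:w]
          ecc = np.hypot(xx - fx, yy - fy) / np.hypot(h, w)   # normalized eccentricity
          levels = [gaussian_filter(image.astype(float), s) for s in (0, 1, 2, 4, sigma_max)]
          idx = np.clip((ecc * len(levels)).astype(int), 0, len(levels) - 1)
          return np.choose(idx, levels)

      # Toy usage on a random image with the fixation point at the center.
      img = np.random.default_rng(0).random((64, 64))
      out = foveate(img, 32, 32)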

  5. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
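
    For illustration, the Python sketch below implements a generic rate-1/2 convolutional encoder (constraint length 3, generators 7 and 5 in octal), the kind of inner code typically decoded with the Viterbi algorithm in a concatenated system. This is a textbook example, not one of the specific codes found in the study.

      def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
          state = 0
          out = []
          for b in bits:
              state = ((state << 1) | b) & ((1 << (memory + 1)) - 1)
              out.append(bin(state & g1).count("1") % 2)   # parity of taps for generator 1
              out.append(bin(state & g2).count("1") % 2)   # parity of taps for generator 2
          for _ in range(memory):                          # flush the encoder with zeros
              state = (state << 1) & ((1 << (memory + 1)) - 1)
              out.append(bin(state & g1).count("1") % 2)
              out.append(bin(state & g2).count("1") % 2)
          return out

      print(conv_encode([1, 0, 1, 1]))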

  6. Probing Cold Dense Nuclear Matter

    NASA Astrophysics Data System (ADS)

    Subedi, R.; Shneor, R.; Monaghan, P.; Anderson, B. D.; Aniol, K.; Annand, J.; Arrington, J.; Benaoum, H.; Benmokhtar, F.; Boeglin, W.; Chen, J.-P.; Choi, Seonho; Cisbani, E.; Craver, B.; Frullani, S.; Garibaldi, F.; Gilad, S.; Gilman, R.; Glamazdin, O.; Hansen, J.-O.; Higinbotham, D. W.; Holmstrom, T.; Ibrahim, H.; Igarashi, R.; de Jager, C. W.; Jans, E.; Jiang, X.; Kaufman, L. J.; Kelleher, A.; Kolarkar, A.; Kumbartzki, G.; LeRose, J. J.; Lindgren, R.; Liyanage, N.; Margaziotis, D. J.; Markowitz, P.; Marrone, S.; Mazouz, M.; Meekins, D.; Michaels, R.; Moffit, B.; Perdrisat, C. F.; Piasetzky, E.; Potokar, M.; Punjabi, V.; Qiang, Y.; Reinhold, J.; Ron, G.; Rosner, G.; Saha, A.; Sawatzky, B.; Shahinyan, A.; Širca, S.; Slifer, K.; Solvignon, P.; Sulkosky, V.; Urciuoli, G. M.; Voutier, E.; Watson, J. W.; Weinstein, L. B.; Wojtsekhowski, B.; Wood, S.; Zheng, X.-C.; Zhu, L.

    2008-06-01

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.

  7. Probing cold dense nuclear matter.

    PubMed

    Subedi, R; Shneor, R; Monaghan, P; Anderson, B D; Aniol, K; Annand, J; Arrington, J; Benaoum, H; Benmokhtar, F; Boeglin, W; Chen, J-P; Choi, Seonho; Cisbani, E; Craver, B; Frullani, S; Garibaldi, F; Gilad, S; Gilman, R; Glamazdin, O; Hansen, J-O; Higinbotham, D W; Holmstrom, T; Ibrahim, H; Igarashi, R; de Jager, C W; Jans, E; Jiang, X; Kaufman, L J; Kelleher, A; Kolarkar, A; Kumbartzki, G; Lerose, J J; Lindgren, R; Liyanage, N; Margaziotis, D J; Markowitz, P; Marrone, S; Mazouz, M; Meekins, D; Michaels, R; Moffit, B; Perdrisat, C F; Piasetzky, E; Potokar, M; Punjabi, V; Qiang, Y; Reinhold, J; Ron, G; Rosner, G; Saha, A; Sawatzky, B; Shahinyan, A; Sirca, S; Slifer, K; Solvignon, P; Sulkosky, V; Urciuoli, G M; Voutier, E; Watson, J W; Weinstein, L B; Wojtsekhowski, B; Wood, S; Zheng, X-C; Zhu, L

    2008-06-13

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.

  8. Probing Cold Dense Nuclear Matter

    SciTech Connect

    Subedi, Ramesh; Shneor, R.; Monaghan, Peter; Anderson, Bryon; Aniol, Konrad; Annand, John; Arrington, John; Benaoum, Hachemi; Benmokhtar, Fatiha; Bertozzi, William; Boeglin, Werner; Chen, Jian-Ping; Choi, Seonho; Cisbani, Evaristo; Craver, Brandon; Frullani, Salvatore; Garibaldi, Franco; Gilad, Shalev; Gilman, Ronald; Glamazdin, Oleksandr; Hansen, Jens-Ole; Higinbotham, Douglas; Holmstrom, Timothy; Ibrahim, Hassan; Igarashi, Ryuichi; De Jager, Cornelis; Jans, Eddy; Jiang, Xiaodong; Kaufman, Lisa; Kelleher, Aidan; Kolarkar, Ameya; Kumbartzki, Gerfried; LeRose, John; Lindgren, Richard; Liyanage, Nilanga; Margaziotis, Demetrius; Markowitz, Pete; Marrone, Stefano; Mazouz, Malek; Meekins, David; Michaels, Robert; Moffit, Bryan; Perdrisat, Charles; Piasetzky, Eliazer; Potokar, Milan; Punjabi, Vina; Qiang, Yi; Reinhold, Joerg; Ron, Guy; Rosner, Guenther; Saha, Arunava; Sawatzky, Bradley; Shahinyan, Albert; Sirca, Simon; Slifer, Karl; Solvignon, Patricia; Sulkosky, Vincent; Urciuoli, Guido; Voutier, Eric; Watson, John; Weinstein, Lawrence; Wojtsekhowski, Bogdan; Wood, Stephen; Zheng, Xiaochao; Zhu, Lingyan

    2008-06-01

    The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.

  9. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

    a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the... significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  10. QPhiX Code Generator

    SciTech Connect

    Joo, Balint

    2014-09-16

    A simple code generator that produces the low-level code kernels used by the QPhiX library for lattice QCD. It generates kernels for the Wilson-Dslash and Wilson-Clover operators, and can be reused to write other optimized kernels for Intel Xeon Phi(tm), Intel Xeon(tm) and potentially other architectures.

  11. QPhiX Code Generator

    SciTech Connect

    Joo, Balint

    2014-09-16

    A simple code generator that produces the low-level code kernels used by the QPhiX library for lattice QCD. It generates kernels for the Wilson-Dslash and Wilson-Clover operators, and can be reused to write other optimized kernels for Intel Xeon Phi(tm), Intel Xeon(tm) and potentially other architectures.

  12. DNA codes

    SciTech Connect

    Torney, D. C.

    2001-01-01

    We have begun to characterize a variety of codes, motivated by potential implementation as (quaternary) DNA n-sequences, with letters denoted A, C, G, and T. The first codes we studied are the most reminiscent of conventional group codes. For these codes, Hamming similarity was generalized so that the score for matched letters takes more than one value, depending upon which letters are matched [2]. These codes consist of n-sequences satisfying an upper bound on the similarities, summed over the letter positions, of distinct codewords. We chose similarity 2 for matches of the letters A and T and 3 for matches of the letters C and G, providing a rough approximation to double-strand bond energies in DNA. An inherent novelty of DNA codes is 'reverse complementation'. The latter may be defined, as follows, not only for alphabets of size four, but, more generally, for any even-size alphabet. All that is required is a matching of the letters of the alphabet: a partition into pairs. Then, the reverse complement of a codeword is obtained by reversing the order of its letters and replacing each letter by its match. For DNA, the matching is AT/CG because these are the Watson-Crick bonding pairs. Reversal arises because two DNA sequences form a double strand with opposite relative orientations. Thus, as will be described in detail, because in vitro decoding involves the formation of double-stranded DNA from two codewords, it is reasonable to assume - for universal applicability - that the reverse complement of any codeword is also a codeword. In particular, self-reverse complementary codewords are expressly forbidden in reverse-complement codes. Thus, an appropriate distance between all pairs of codewords must, when large, effectively prohibit the respective codewords from binding to form a double strand. Only reverse-complement pairs of codewords should be able to bind. For most applications, a DNA code is to be bi-partitioned, such that the reverse-complementary pairs are separated
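
    The two ingredients described above can be illustrated with a short Python sketch: the reverse complement of a codeword (reverse the sequence and swap the Watson-Crick partners A/T and C/G) and the weighted similarity that scores 2 for an A/T match and 3 for a C/G match at the same position, summed over positions. The example words are made up.

      COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}
      MATCH_SCORE = {"A": 2, "T": 2, "C": 3, "G": 3}

      def reverse_complement(word):
          return "".join(COMPLEMENT[ch] for ch in reversed(word))

      def weighted_similarity(u, v):
          """Sum of per-position match scores (0 for mismatched positions)."""
          return sum(MATCH_SCORE[a] for a, b in zip(u, v) if a == b)

      print(reverse_complement("ACGTTG"))            # -> CAACGT
      print(weighted_similarity("ACGT", "AGGT"))     # A (2) + G (3) + T (2) = 7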

  13. Inference by replication in densely connected systems

    SciTech Connect

    Neirotti, Juan P.; Saad, David

    2007-10-15

    An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.

  14. Optically transparent dense colloidal gels

    PubMed Central

    Zupkauskas, M.; Lan, Y.; Joshi, D.; Ruff, Z.

    2017-01-01

    Traditionally it has been difficult to study the porous structure of dense colloidal gels and (macro) molecular transport through them simply because of the difference in refractive index between the colloid material and the continuous fluid phase surrounding it, rendering the samples opaque even at low colloidal volume fractions. Here, we demonstrate a novel colloidal gel that can be refractive index-matched in aqueous solutions owing to the low refractive index of fluorinated latex (FL)-particles (n = 1.37). Synthesizing them from heptafluorobutyl methacrylate using emulsion polymerization, we demonstrate that they can be functionalized with short DNA sequences via a dense brush-layer of polystyrene-b-poly(ethylene oxide) block-copolymers (PS-PEO). The block-copolymer, holding an azide group at the free PEO end, was grafted to the latex particle utilizing a swelling–deswelling method. Subsequently, DNA was covalently attached to the azide-end of the block copolymer via a strain-promoted alkyne–azide click reaction. For comparison, we present a structural study of single gels made of FL-particles only and composite gels made of a percolating FL-colloid gel coated with polystyrene (PS) colloids. Further we demonstrate that the diffusivity of tracer colloids dispersed deep inside a refractive index matched FL-colloidal gel can be measured as function of the local confinement using Dynamic Differential Microscopy (DDM). PMID:28970935

  15. Magnetism in Dense Quark Matter

    NASA Astrophysics Data System (ADS)

    Ferrer, Efrain J.; de la Incera, Vivian

    We review the mechanisms via which an external magnetic field can affect the ground state of cold and dense quark matter. In the absence of a magnetic field, at asymptotically high densities, cold quark matter is in the Color-Flavor-Locked (CFL) phase of color superconductivity characterized by three scales: the superconducting gap, the gluon Meissner mass, and the baryonic chemical potential. When an applied magnetic field becomes comparable with each of these scales, new phases and/or condensates may emerge. They include the magnetic CFL (MCFL) phase that becomes relevant for fields of the order of the gap scale; the paramagnetic CFL, important when the field is of the order of the Meissner mass, and a spin-one condensate associated to the magnetic moment of the Cooper pairs, significant at fields of the order of the chemical potential. We discuss the equation of state (EoS) of MCFL matter for a large range of field values and consider possible applications of the magnetic effects on dense quark matter to the astrophysics of compact stars.

  16. Dense crystalline packings of ellipsoids

    NASA Astrophysics Data System (ADS)

    Jin, Weiwei; Jiao, Yang; Liu, Lufeng; Yuan, Ye; Li, Shuixiang

    2017-03-01

    An ellipsoid, the simplest nonspherical shape, has been extensively used as a model for elongated building blocks for a wide spectrum of molecular, colloidal, and granular systems. Yet the densest packing of congruent hard ellipsoids, which is intimately related to the high-density phase of many condensed matter systems, is still an open problem. We discover an unusual family of dense crystalline packings of self-dual ellipsoids (ratios of the semiaxes α : √α : 1), containing 24 particles with a quasi-square-triangular (SQ-TR) tiling arrangement in the fundamental cell. The associated packing density ϕ exceeds that of the densest known SM2 crystal [A. Donev et al., Phys. Rev. Lett. 92, 255506 (2004), 10.1103/PhysRevLett.92.255506] for aspect ratios α in (1.365, 1.5625), attaining a maximal ϕ ≈ 0.75806... at α = 93/64. We show that the SQ-TR phase derived from these dense packings is thermodynamically stable at high densities over the aforementioned α range and report a phase diagram for self-dual ellipsoids. The discovery of the SQ-TR crystal suggests organizing principles for nonspherical particles and self-assembly of colloidal systems.

  17. Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences

    DTIC Science & Technology

    2008-07-01

    A random coding bound on the rate of DNA codes is proved. To obtain the bound, we use ensembles of DNA sequences which are generalizations of the Fibonacci sequences. Subject terms: DNA codes, Fibonacci ensembles, DNA computing, code optimization.

  18. Sharing code

    PubMed Central

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing. PMID:25165519

  19. Subspace-Aware Index Codes

    DOE PAGES

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    2017-04-12

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both the subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.

  20. Dynamics and evolution of dense stellar systems

    NASA Astrophysics Data System (ADS)

    Fregeau, John M.

    2004-10-01

    The research presented in this thesis comprises a theoretical study of several aspects relating to the dynamics and evolution of dense stellar systems such as globular clusters. First, I present the results of a study of mass segregation in two-component star clusters, based on a large number of numerical N-body simulations using our Monte-Carlo code. Heavy objects, which could represent stellar remnants such as neutron stars or black holes, exhibit behavior that is in quantitative agreement with simple analytical arguments. Light objects, which could represent free-floating planets or brown dwarfs, are predominantly lost from the cluster, as expected from simple analytical arguments, but may remain in the halo in larger numbers than expected. Using a recent null detection of planetary-mass microlensing events in M22, I find an upper limit of ˜25% at the 63% confidence level for the current mass fraction of M22 in the form of very low-mass objects. Turning to more realistic clusters, I present a study of the evolution of clusters containing primordial binaries, based on an enhanced version of the Monte-Carlo code that treats binary interactions via cross sections and analytical prescriptions. All models exhibit a long-lived “binary burning” phase lasting many tens of relaxation times. The structural parameters of the models during this phase match well those of most observed Galactic globular clusters. At the end of this phase, clusters that have survived tidal disruption undergo deep core collapse, followed by gravothermal oscillations. The results clearly show that the presence of even a small fraction of binaries in a cluster is sufficient to support the core against collapse significantly beyond the normal core collapse time predicted without the presence of binaries. For tidally truncated systems, collapse is delayed sufficiently that the cluster will undergo complete tidal disruption before core collapse. Moving a step beyond analytical prescriptions, I

  1. Dense deformation field estimation for brain intraoperative images registration

    NASA Astrophysics Data System (ADS)

    De Craene, Mathieu S.; du Bois d'Aische, Aloys; Talos, Ion-Florin; Ferrant, Matthieu; Black, Peter M.; Jolesz, Ferenc; Kikinis, Ron; Macq, Benoit; Warfield, Simon K.

    2004-05-01

    A new fast non-rigid registration algorithm is presented. The algorithm estimates a dense deformation field by optimizing a criterion that measures image similarity by mutual information and regularizes the field with a linear elastic energy term. The optimal deformation field is found using a Simultaneous Perturbation Stochastic Approximation (SPSA) of the gradient. The implementation is parallelized for symmetric multi-processor architectures. This algorithm was applied to capture non-rigid brain deformations that occur during neurosurgery. Segmentation of the intraoperative data is not required, but preoperative segmentation of the brain allows the algorithm to be robust to artifacts due to the craniotomy.
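
    A minimal sketch of the SPSA step named above is shown below in Python: the gradient is estimated from only two cost evaluations with a random simultaneous perturbation. The cost function here is a toy placeholder; in the registration setting the parameter vector would describe the dense deformation field, and the gain schedule would be chosen more carefully.

      import numpy as np

      def spsa_step(cost, theta, a=0.01, c=0.01, rng=np.random.default_rng()):
          """One SPSA iteration: a two-evaluation stochastic estimate of the gradient."""
          delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
          g_hat = (cost(theta + c * delta) - cost(theta - c * delta)) / (2.0 * c * delta)
          return theta - a * g_hat

      # Toy usage: minimize a quadratic in place of the registration criterion.
      cost = lambda t: np.sum((t - 3.0) ** 2)
      theta = np.zeros(5)
      for _ in range(500):
          theta = spsa_step(cost, theta)
      print(theta)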

  2. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  3. Multiple Satellite Trajectory Optimization

    DTIC Science & Technology

    2004-12-01

    Solving optimal control problems. The driving principle used to solve optimal control problems was first formalized by the Soviet... methods and processes of solving optimal control problems; this section will demonstrate how the formulations work as expected. Once coded, the

  4. DPIS for warm dense matter

    SciTech Connect

    Kondo, K.; Kanesue, T.; Horioka, K.; Okamura, M.

    2010-05-23

    Warm Dense Matter (WDM) poses a challenging problem because WDM, which lies beyond the ideal plasma regime, is in a low-temperature, high-density state with partially degenerate electrons and coupled ions. WDM is a common state of matter in astrophysical objects such as the cores of giant planets and white dwarfs. WDM studies require large energy deposition into a small target volume in a time shorter than the hydrodynamical time, with uniformity across the full thickness of the target. Since moderate-energy ion beams (~0.3 MeV/u) can be a useful tool for WDM physics, we propose WDM generation using the Direct Plasma Injection Scheme (DPIS). In the DPIS, a laser ion source is connected directly to the Radio Frequency Quadrupole (RFQ) linear accelerator without a beam transport line. DPIS with a realistic final focus and a linear accelerator can produce WDM.

  5. Uniformly dense polymeric foam body

    DOEpatents

    Whinnery, Jr., Leroy

    2003-07-15

    A method for providing a uniformly dense polymer foam body having a density between about 0.013 g/cm^3 and about 0.5 g/cm^3 is disclosed. The method utilizes a thermally expandable polymer microsphere material wherein some of the microspheres are unexpanded and some are only partially expanded. It is shown that by mixing the two types of material in appropriate ratios to achieve the desired final bulk density, filling a mold with this mixture so as to displace all or essentially all of the internal volume of the mold, heating the mold for a predetermined interval at a temperature above about 130 °C, and then cooling the mold to a temperature below 80 °C, the molded part achieves a bulk density which varies by less than about ±6% everywhere throughout the part volume.

  6. Velocity coherence in dense cores

    NASA Astrophysics Data System (ADS)

    Goodman, Alyssa A.; Barranco, Joseph A.; Wilner, David J.; Heyer, Mark H.

    1997-02-01

    At the meeting, we presented a summary of two papers which support the hypothesis that the molecular clouds which contain star-forming low-mass dense cores are self-similar in nature on size scales larger than an inner scale, R_coh, and that within R_coh, the cores are "coherent," in that their filling factor is large and they are characterized by a very small, roughly constant, mildly supersonic velocity dispersion. We expect these two papers, by Barranco & Goodman [1] and Goodman, Barranco, Wilner, & Heyer, to appear in the Astrophysical Journal within the coming year. Here, we present a short summary of our results. The interested reader is urged to consult the on-line version of this work at cfa-www.harvard.edu/~agoodman/vel_coh.html [2].

  7. Neutrino Oscillations in Dense Matter

    NASA Astrophysics Data System (ADS)

    Lobanov, A. E.

    2017-03-01

    A modification of the electroweak theory, where the fermions with the same electroweak quantum numbers are combined in multiplets and are treated as different quantum states of a single particle, is proposed. In this model, mixing and oscillations of particles arise as a direct consequence of the general principles of quantum field theory. The developed approach enables one to calculate the probabilities of the processes taking place in the detector at long distances from the particle source. Calculations of higher-order processes, including computation of the contributions due to radiative corrections, can be performed in the framework of the perturbation theory using the regular diagram technique. As a result, the analog of the Dirac-Schwinger equation of quantum electrodynamics, describing neutrino oscillations and neutrino spin rotation in dense matter, can be obtained.

  8. Viscoelastic behavior of dense microemulsions

    NASA Astrophysics Data System (ADS)

    Cametti, C.; Codastefano, P.; D'arrigo, G.; Tartaglia, P.; Rouch, J.; Chen, S. H.

    1990-09-01

    We have performed extensive measurements of shear viscosity, ultrasonic absorption, and sound velocity in a ternary system consisting of water-decane-sodium di(2-ethylhexyl)sulfosuccinate (AOT), in the one-phase region where it forms a water-in-oil microemulsion. We observe a rapid increase of the static shear viscosity in the dense microemulsion region. Correspondingly the sound absorption shows unambiguous evidence of a viscoelastic behavior. The absorption data for various volume fractions and temperatures can be reduced to a universal curve by scaling both the absorption and the frequency by the measured static shear viscosity. The sound absorption can be interpreted as coming from the high-frequency tail of the viscoelastic relaxation, describable by a Cole-Cole relaxation formula with unusually small elastic moduli.

  9. Extended thermodynamics of dense gases

    NASA Astrophysics Data System (ADS)

    Arima, T.; Taniguchi, S.; Ruggeri, T.; Sugiyama, M.

    2012-11-01

    We study extended thermodynamics of dense gases by adopting a system of field equations with a hierarchy structure different from that adopted in previous works. It is the theory of 14 fields of mass density, velocity, temperature, viscous stress, dynamic pressure, and heat flux. As a result, most of the constitutive equations can be determined explicitly by the caloric and thermal equations of state. It is shown that the rarefied-gas limit of the theory is consistent with the kinetic theory of gases. We also analyze three physically important systems, that is, a gas with the virial equations of state, a hard-sphere system, and a van der Waals fluid, by using the general theory developed in the earlier part of the present work.

  10. Diagnostic of dense plasmas using X-ray spectra

    NASA Astrophysics Data System (ADS)

    Yu, Q. Z.; Zhang, J.; Li, Y. T.; Zhang, Z.; Jin, Z.; Lu, X.; Li, J.; Yu, Y. N.; Jiang, X. H.; Li, W. H.; Liu, S. Y.

    2005-12-01

    The spectrally and spatially resolved X-ray spectra emitted from a dense aluminum plasma produced by 500 J, 1 ns Nd:glass laser pulses are presented. Six primary hydrogen-like and helium-like lines are identified and simulated with the atomic physics code FLY. We find that the plasma is almost completely ionized under the experimental conditions. The highest electron density we measured reaches 10^23 cm^-3. The spatial variations of the electron temperature and density are compared with simulations of the MEDUSA hydrocode for targets of different geometries. The results indicate that lateral expansion of the plasma produced with this laser beam plays an important role.

  11. The performance of dense medium processes

    SciTech Connect

    Horsfall, D.W.

    1993-12-31

    Dense medium washing in baths and cyclones is widely carried out in South Africa. The paper shows the reason for the preferred use of dense medium processes rather than gravity concentrators such as jigs. The factors leading to efficient separation in baths are listed and an indication given of the extent to which these factors may be controlled and embodied in the deployment of baths and dense medium cyclones in the planning stages of a plant.

  12. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the

  13. Dense module enumeration in biological networks

    NASA Astrophysics Data System (ADS)

    Tsuda, Koji; Georgii, Elisabeth

    2009-12-01

    Analysis of large networks is a central topic in various research fields including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step in analyzing such networks. Though numerous methods have been proposed for this purpose, they often lack mathematical rigor. Namely, there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources such as gene expression profiles or customer profiles can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.
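
    As a rough illustration of the density criterion such enumeration methods guarantee (not of the reverse-search algorithm itself), the sketch below brute-forces every vertex subset of a toy interaction graph and keeps those whose induced subgraph meets a density threshold; the graph, threshold, and size limit are illustrative assumptions.

      # Minimal sketch (not the reverse-search method itself): brute-force enumeration
      # of all vertex subsets whose induced subgraph meets a density threshold.
      # The toy graph, threshold, and minimum size are illustrative assumptions.
      from itertools import combinations

      def density(nodes, adj):
          """Edge density of the subgraph induced by `nodes` (1.0 = clique)."""
          nodes = list(nodes)
          if len(nodes) < 2:
              return 0.0
          edges = sum(1 for u, v in combinations(nodes, 2) if v in adj[u])
          return edges / (len(nodes) * (len(nodes) - 1) / 2)

      def enumerate_dense_modules(adj, theta=0.8, min_size=3):
          """Yield every subset of size >= min_size with density >= theta."""
          nodes = list(adj)
          for k in range(min_size, len(nodes) + 1):
              for subset in combinations(nodes, k):
                  if density(subset, adj) >= theta:
                      yield subset

      # Toy protein-interaction-like graph as an adjacency dict.
      adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b", "d"}, "d": {"b", "c"}}
      print(list(enumerate_dense_modules(adj, theta=0.8)))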

  14. Understanding shape entropy through local dense packing

    PubMed Central

    van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; Engel, Michael; Glotzer, Sharon C.

    2014-01-01

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. Here, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (kBT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. We show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa. PMID:25344532

  15. Understanding shape entropy through local dense packing

    DOE PAGES

    van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; ...

    2014-10-24

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. In this paper, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (kBT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. Finally, we show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa.

  16. Dense ceramic membranes for methane conversion

    SciTech Connect

    Balachandran, U.; Mieville, R.L.; Ma, B.; Udovich, C.A.

    1996-05-01

    This report focuses on a mechanism for oxygen transport through mixed- oxide conductors as used in dense ceramic membrane reactors for the partial oxidation of methane to syngas (CO and H{sub 2}). The in-situ separation of O{sub 2} from air by the membrane reactor saves the costly cryogenic separation step that is required in conventional syngas production. The mixed oxide of choice is SrCo{sub 0.5}FeO{sub x}, which exhibits high oxygen permeability and has been shown in previous studies to possess high stability in both oxidizing and reducing conditions; in addition, it can be readily formed into reactor configurations such as tubes. An understanding of the electrical properties and the defect dynamics in this material is essential and will help us to find the optimal operating conditions for the conversion reactor. In this paper, we discuss the conductivities of the SrFeCo{sub 0.5}O{sub x} system that are dependent on temperature and partial pressure of oxygen. Based on the experimental results, a defect model is proposed to explain the electrical properties of this system. The oxygen permeability of SrFeCo{sub 0.5}O{sub x} is estimated by using conductivity data and is compared with that obtained from methane conversion reaction.

  17. Understanding shape entropy through local dense packing

    SciTech Connect

    van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; Engel, Michael; Glotzer, Sharon C.

    2014-10-24

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. In this paper, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (kBT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. Finally, we show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa.

  18. Dense packings of the Platonic and Archimedean solids.

    PubMed

    Torquato, S; Jiao, Y

    2009-08-13

    Dense particle packings have served as useful models of the structures of liquid, glassy and crystalline states of matter, granular media, heterogeneous materials and biological systems. Probing the symmetries and other mathematical properties of the densest packings is a problem of interest in discrete geometry and number theory. Previous work has focused mainly on spherical particles-very little is known about dense polyhedral packings. Here we formulate the generation of dense packings of polyhedra as an optimization problem, using an adaptive fundamental cell subject to periodic boundary conditions (we term this the 'adaptive shrinking cell' scheme). Using a variety of multi-particle initial configurations, we find the densest known packings of the four non-tiling Platonic solids (the tetrahedron, octahedron, dodecahedron and icosahedron) in three-dimensional Euclidean space. The densities are 0.782..., 0.947..., 0.904... and 0.836..., respectively. Unlike the densest tetrahedral packing, which must not be a Bravais lattice packing, the densest packings of the other non-tiling Platonic solids that we obtain are their previously known optimal (Bravais) lattice packings. Combining our simulation results with derived rigorous upper bounds and theoretical arguments leads us to the conjecture that the densest packings of the Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This is the analogue of Kepler's sphere conjecture for these solids.

  19. Nature's Code

    NASA Astrophysics Data System (ADS)

    Hill, Vanessa J.; Rowlands, Peter

    2008-10-01

    We propose that the mathematical structures related to the `universal rewrite system' define a universal process applicable to Nature, which we may describe as `Nature's code'. We draw attention here to such concepts as 4 basic units, 64- and 20-unit structures, symmetry-breaking and 5-fold symmetry, chirality, double 3-dimensionality, the double helix, the Van der Waals force and the harmonic oscillator mechanism, and our explanation of how they necessarily lead to self-aggregation, complexity and emergence in higher-order systems. Biological concepts, such as translation, transcription, replication, the genetic code and the grouping of amino acids appear to be driven by fundamental processes of this kind, and it would seem that the Platonic solids, pentagonal symmetry and Fibonacci numbers have significant roles in organizing `Nature's code'.

  20. Show Code.

    PubMed

    Shalev, Daniel

    2017-01-01

    "Let's get one thing straight: there is no such thing as a show code," my attending asserted, pausing for effect. "You either try to resuscitate, or you don't. None of this halfway junk." He spoke so loudly that the two off-service consultants huddled at computers at the end of the unit looked up… We did four rounds of compressions and pushed epinephrine twice. It was not a long code. We did good, strong compressions and coded this man in earnest until the end. Toward the final round, though, as I stepped up to do compressions, my attending looked at me in a deep way. It was a look in between willing me as some object under his command and revealing to me everything that lay within his brash, confident surface but could not be spoken. © 2017 The Hastings Center.

  1. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc. or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple reduction optimization capability in future.
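
    The formulation described above (minimize a chosen response subject to bounds on the others) can be sketched with a generic constrained optimizer; the snippet below uses scipy in place of COPES/ADS and made-up surrogate functions for weight, life, and dynamic load, so it only illustrates the problem structure, not the NASA analysis codes.

      # Illustrative sketch of the weight-minimization formulation, with hypothetical
      # surrogate models standing in for the spur-gear analysis code and COPES/ADS.
      # All models, bounds, and limits below are made-up placeholders.
      import numpy as np
      from scipy.optimize import minimize

      def gear_weight(x):
          face_width, n_teeth, diametral_pitch = x
          pitch_diam = n_teeth / diametral_pitch                 # inches, by definition
          return 0.28 * np.pi / 4 * pitch_diam**2 * face_width   # steel-like density, lb/in^3

      def gear_life(x):            # placeholder surrogate: life grows with gear size
          face_width, n_teeth, diametral_pitch = x
          return 1e3 * face_width * (n_teeth / diametral_pitch) ** 2

      def dynamic_load(x):         # placeholder surrogate: load falls with face width
          face_width, n_teeth, diametral_pitch = x
          return 5e3 / (face_width * n_teeth / diametral_pitch)

      x0 = np.array([1.5, 30.0, 8.0])        # face width, teeth (continuous here), diametral pitch
      constraints = [
          {"type": "ineq", "fun": lambda x: gear_life(x) - 5e4},      # life >= 5e4 hours
          {"type": "ineq", "fun": lambda x: 2e3 - dynamic_load(x)},   # dynamic load <= 2e3 lb
      ]
      bounds = [(0.5, 4.0), (18, 60), (4, 16)]
      result = minimize(gear_weight, x0, bounds=bounds, constraints=constraints)
      print(result.x, gear_weight(result.x))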

  2. Percolation in dense storage arrays

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, Scott; Wilcke, Winfried W.; Garner, Robert B.; Huels, Harald

    2002-11-01

    As computers and their accessories become smaller, cheaper, and faster the providers of news, retail sales, and other services we now take for granted on the Internet have met their increasing computing needs by putting more and more computers, hard disks, power supplies, and the data communications linking them to each other and to the rest of the wired world into ever smaller spaces. This has created a new and quite interesting percolation problem. It is no longer desirable to fix computers, storage or switchgear which fail in such a dense array. Attempts to repair things are all too likely to make problems worse. The alternative approach, letting units “fail in place”, be removed from service and routed around, means that a data communications environment will evolve with an underlying regular structure but a very high density of missing pieces. Some of the properties of this kind of network can be described within the existing paradigm of site or bond percolation on lattices, but other important questions have not been explored. I will discuss 3D arrays of hundreds to thousands of storage servers (something which it is quite feasible to build in the next few years), and show that bandwidth, but not percolation fraction or shortest path lengths, is the critical factor affected by the “fail in place” disorder. Redundancy strategies traditionally employed in storage systems may have to be revised. Novel approaches to routing information among the servers have been developed to minimize the impact.
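
    A minimal sketch of the underlying site-percolation picture, assuming a cubic lattice of servers that each survive with probability p and a spanning-cluster test by flood fill; the array size and survival probability are illustrative, and the bandwidth effects the author identifies as critical are not modeled.

      # "Fail in place" as site percolation on a 3D lattice: each server survives
      # with probability p, and we test whether a connected path of live sites
      # still spans the array. Sizes and p are illustrative assumptions.
      import random

      def percolates(n=10, p=0.9, seed=0):
          rng = random.Random(seed)
          alive = {(x, y, z) for x in range(n) for y in range(n) for z in range(n)
                   if rng.random() < p}
          # Flood fill (depth-first) from every live site on the z = 0 face.
          frontier = [s for s in alive if s[2] == 0]
          seen = set(frontier)
          while frontier:
              x, y, z = frontier.pop()
              if z == n - 1:
                  return True                      # reached the far face: spanning cluster
              for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                  nxt = (x + dx, y + dy, z + dz)
                  if nxt in alive and nxt not in seen:
                      seen.add(nxt)
                      frontier.append(nxt)
          return False

      print(sum(percolates(p=0.9, seed=s) for s in range(20)), "of 20 arrays span")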

  3. Approximate hard-sphere method for densely packed granular flows

    NASA Astrophysics Data System (ADS)

    Guttenberg, Nicholas

    2011-05-01

    The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
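
    For orientation, the sketch below shows only the elementary operation that any event-driven granular code builds on: resolving a single binary hard-disk collision with a restitution coefficient. The combined multi-collision timestep described in the abstract is not reproduced here.

      # Minimal sketch of the elementary operation underlying event-driven granular
      # codes: resolving one binary hard-disk collision (equal masses, restitution e).
      # This is only the single-collision update, not the combined-timestep scheme.
      import numpy as np

      def collide(x1, x2, v1, v2, e=0.9):
          """Return post-collision velocities for two equal-mass disks in contact."""
          n = (x2 - x1) / np.linalg.norm(x2 - x1)      # unit vector along line of centers
          v_rel_n = np.dot(v1 - v2, n)                 # approach speed along the normal
          if v_rel_n <= 0:                             # moving apart: no impulse
              return v1, v2
          j = 0.5 * (1.0 + e) * v_rel_n                # impulse magnitude per unit mass
          return v1 - j * n, v2 + j * n

      v1p, v2p = collide(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
      print(v1p, v2p)      # head-on with e=0.9: velocities reverse and shrink to 0.9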

  4. Approximate hard-sphere method for densely packed granular flows.

    PubMed

    Guttenberg, Nicholas

    2011-05-01

    The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.

  5. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
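
    As a small concrete piece of the scheme above, here is a bitwise 16-bit CRC of the kind CCSDS recommends for error detection; the polynomial 0x1021 with an all-ones preset is the commonly cited choice, but treat the exact parameters as an assumption rather than a quotation of the standard.

      # Sketch of a 16-bit CRC for error detection. Polynomial x^16 + x^12 + x^5 + 1
      # (0x1021) with an all-ones preset is the usual CCSDS-style choice; the exact
      # parameters here are an assumption, not a quotation of the recommendation.
      def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
          crc = init
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
              # after 8 shifts the byte has been fully folded into the register
          return crc

      frame = b"\x12\x34\x56\x78"
      check = crc16(frame)
      # Appending the check symbols and re-running the CRC over frame+check yields 0,
      # which is how the receiver detects (most) channel errors.
      print(hex(check), hex(crc16(frame + check.to_bytes(2, "big"))))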

  6. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  7. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  8. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  9. Optimized speech understanding with the continuous interleaved sampling speech coding strategy in patients with cochlear implants: effect of variations in stimulation rate and number of channels.

    PubMed

    Kiefer, J; von Ilberg, C; Rupprecht, V; Hubner-Egner, J; Knecht, R

    2000-11-01

    The purpose of this study was to investigate the effect of systematic variations in stimulation rate and number of channels on speech understanding in 13 patients with cochlear implants who used the continuous interleaved sampling speech coding strategy. Reducing the stimulation rate from 1,515-1,730 pulses per second per channel to 600 pulses per second per channel resulted in decreased overall performance; the understanding of monosyllables and consonants was more affected than the understanding of vowels. Reducing the number of active channels below 7 or 8 channels decreased speech understanding; the identification of vowels and monosyllables was most affected. We conclude that vowel recognition with the continuous interleaved sampling strategy relies on spectral cues more than on temporal cues, increasing with the number of active channels, whereas consonant recognition is more dependent on temporal cues and stimulation rate.

  10. Dense packings of polyhedra: Platonic and Archimedean solids.

    PubMed

    Torquato, S; Jiao, Y

    2009-10-01

    Understanding the nature of dense particle packings is a subject of intense research in the physical, mathematical, and biological sciences. The preponderance of previous work has focused on spherical particles and very little is known about dense polyhedral packings. We formulate the problem of generating dense packings of nonoverlapping, nontiling polyhedra within an adaptive fundamental cell subject to periodic boundary conditions as an optimization problem, which we call the adaptive shrinking cell (ASC) scheme. This optimization problem is solved here (using a variety of multiparticle initial configurations) to find the dense packings of each of the Platonic solids in three-dimensional Euclidean space R^3, except for the cube, which is the only Platonic solid that tiles space. We find the densest known packings of tetrahedra, icosahedra, dodecahedra, and octahedra with densities 0.823..., 0.836..., 0.904..., and 0.947..., respectively. It is noteworthy that the densest tetrahedral packing possesses no long-range order. Unlike the densest tetrahedral packing, which must not be a Bravais lattice packing, the densest packings of the other nontiling Platonic solids that we obtain are their previously known optimal (Bravais) lattice packings. We also derive a simple upper bound on the maximal density of packings of congruent nonspherical particles and apply it to Platonic solids, Archimedean solids, superballs, and ellipsoids. Provided that what we term the "asphericity" (ratio of the circumradius to inradius) is sufficiently small, the upper bounds are relatively tight and thus close to the corresponding densities of the optimal lattice packings of the centrally symmetric Platonic and Archimedean solids. Our simulation results, rigorous upper bounds, and other theoretical arguments lead us to the conjecture that the densest packings of Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This can be
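
    The "asphericity" that controls the tightness of the upper bound can be illustrated numerically: the sketch below computes circumradius/inradius from vertex coordinates via a convex hull for a few Platonic solids. It is a geometric illustration only, not the adaptive-shrinking-cell packing algorithm.

      # Numerical illustration of the asphericity gamma = circumradius / inradius
      # used in the upper bound above, computed from vertex coordinates with scipy's
      # convex hull. The solids below are standard unit constructions.
      import numpy as np
      from scipy.spatial import ConvexHull

      def asphericity(vertices):
          v = np.asarray(vertices, dtype=float)
          v -= v.mean(axis=0)                         # center at the centroid
          hull = ConvexHull(v)
          circum = np.linalg.norm(v, axis=1).max()    # farthest vertex from the center
          # hull.equations rows are [n_x, n_y, n_z, d] with unit normals and
          # n.x + d <= 0 inside, so -d is the distance from the center to each face.
          inr = (-hull.equations[:, -1]).min()
          return circum / inr

      tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
      octa = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
      cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
      for name, verts in [("tetrahedron", tetra), ("octahedron", octa), ("cube", cube)]:
          print(name, round(asphericity(verts), 3))   # 3.0, 1.732, 1.732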

  11. Robust coding over noisy overcomplete channels.

    PubMed

    Doi, Eizaburo; Balcan, Doru C; Lewicki, Michael S

    2007-02-01

    We address the problem of robust coding in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions in order to achieve robustness. We also present numerical solutions of robust coding for high-dimensional image data, demonstrating that these codes are substantially more robust than other linear image coding methods such as PCA, ICA, and wavelets.
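
    A minimal sketch of the setup analyzed above, assuming a fixed (here random) linear encoder, Gaussian signal statistics, and additive representation noise: the MSE-optimal linear decoder is then the Wiener solution. The paper's optimized encoders are not reproduced; this only shows how the decoder follows from the signal and noise covariances.

      # Sketch of the general setup: a linear encoder W maps a signal x to noisy
      # coding units r = Wx + n, and the MSE-optimal linear decoder is the Wiener
      # solution D = Cx W^T (W Cx W^T + Cn)^{-1}. Encoder and covariances here are
      # arbitrary illustrative choices, not the optimized codes from the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      d, k = 2, 5                                   # signal dim, number of coding units
      Cx = np.array([[1.0, 0.6], [0.6, 1.0]])       # signal covariance
      W = rng.standard_normal((k, d))               # fixed (here random) overcomplete encoder
      sigma2 = 0.1                                  # intrinsic representation noise variance
      Cn = sigma2 * np.eye(k)

      D = Cx @ W.T @ np.linalg.inv(W @ Cx @ W.T + Cn)   # MSE-optimal linear decoder

      # Empirical check of the reconstruction error.
      x = rng.multivariate_normal(np.zeros(d), Cx, size=10000)
      r = x @ W.T + rng.standard_normal((10000, k)) * np.sqrt(sigma2)
      mse = np.mean(np.sum((x - r @ D.T) ** 2, axis=1))
      print("empirical MSE per sample:", round(mse, 4))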

  12. Combined trellis coding with asymmetric modulations

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1985-01-01

    The use of asymmetric signal constellations combined with optimized trellis coding to improve the performance of coded systems without increasing the average or peak power, or changing the bandwidth constraints of a system is discussed. The trellis code, asymmetric signal set, and Viterbi decoder of the system model are examined. The procedures for assigning signals to state transitions of the trellis code are described; the performance of the trellis coding system is evaluated. Examples of AM, QAM, and MPSK modulations with short memory trellis codes are presented.
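
    To make the trellis/Viterbi machinery concrete, the sketch below hard-decision-decodes the standard rate-1/2, constraint-length-3 convolutional code (generators 7,5 octal). The asymmetric signal constellations and soft metrics that the study actually optimizes are not modeled.

      # Hard-decision Viterbi sketch for the standard rate-1/2, constraint-length-3
      # convolutional code (generators 7,5 octal). It only illustrates the trellis
      # and survivor bookkeeping; constellation asymmetry is not modeled.
      G = (0b111, 0b101)            # generator polynomials

      def encode(bits):
          state, out = 0, []
          for b in bits:
              reg = (b << 2) | state
              out += [bin(reg & g).count("1") & 1 for g in G]
              state = reg >> 1
          return out

      def viterbi(received):
          n_states, INF = 4, float("inf")
          metric = [0] + [INF] * (n_states - 1)           # encoder starts in the all-zero state
          paths = [[] for _ in range(n_states)]
          for i in range(0, len(received), 2):
              r = received[i:i + 2]
              new_metric = [INF] * n_states
              new_paths = [None] * n_states
              for state in range(n_states):
                  if metric[state] == INF:
                      continue
                  for b in (0, 1):
                      reg = (b << 2) | state
                      expected = [bin(reg & g).count("1") & 1 for g in G]
                      m = metric[state] + sum(x != y for x, y in zip(r, expected))
                      nxt = reg >> 1
                      if m < new_metric[nxt]:             # keep the best survivor per state
                          new_metric[nxt] = m
                          new_paths[nxt] = paths[state] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(range(n_states), key=lambda s: metric[s])]

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      coded = encode(msg)
      coded[3] ^= 1                                      # inject a single channel error
      print(viterbi(coded) == msg)                       # True: the error is corrected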

  13. High-speed code validation

    NASA Technical Reports Server (NTRS)

    Barnwell, Richard W.; Rogers, R. Clayton; Pittman, James L.; Dwoyer, Douglas L.

    1987-01-01

    The topics are presented in viewgraph form and include the following: NFL body experiment; high-speed validation problems; 3-D Euler/Navier-Stokes inlet code; two-strut inlet configuration; pressure contours in two longitudinal planes; sidewall pressure distribution; pressure distribution on strut inner surface; inlet/forebody tests in 60 inch helium tunnel; pressure distributions on elliptical missile; code validations; small scale test apparatus; CARS nonintrusive measurements; optimized cone-derived waverider study; etc.

  14. Efficient calculation of atomic rate coefficients in dense plasmas

    NASA Astrophysics Data System (ADS)

    Aslanyan, Valentin; Tallents, Greg J.

    2017-03-01

    Modelling electron statistics in a cold, dense plasma by the Fermi-Dirac distribution leads to complications in the calculations of atomic rate coefficients. The Pauli exclusion principle slows down the rate of collisions as electrons must find unoccupied quantum states and adds a further computational cost. Methods to calculate these coefficients by direct numerical integration with a high degree of parallelism are presented. This degree of optimization allows the effects of degeneracy to be incorporated into a time-dependent collisional-radiative model. Example results from such a model are presented.
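
    A schematic of how degeneracy enters such a rate coefficient, assuming a made-up cross section and ignoring units and overall normalization: the scattered electron must land in an unoccupied state, so the integrand carries a Pauli-blocking factor 1 - f(E'). The real calculation, and its parallelization, is far more involved.

      # Schematic of a collisional rate-coefficient integral with Fermi-Dirac
      # statistics and Pauli blocking. The cross section, units, and normalization
      # are placeholders; only the role of the blocking factor is illustrated.
      import numpy as np
      from scipy.integrate import quad

      kT = 5.0          # electron temperature, eV
      mu = 10.0         # chemical potential, eV (degenerate when mu >> kT)
      dE = 8.0          # excitation threshold, eV

      def fermi(E):
          return 1.0 / (np.exp((E - mu) / kT) + 1.0)

      def sigma(E):                       # placeholder cross section above threshold
          return 1.0 / E if E > dE else 0.0

      def integrand(E, blocking=True):
          pauli = (1.0 - fermi(E - dE)) if blocking else 1.0
          return sigma(E) * E * fermi(E) * pauli   # E ~ v(E) x density of states, up to constants

      blocked, _ = quad(integrand, dE, 40 * kT, args=(True,))
      free, _ = quad(integrand, dE, 40 * kT, args=(False,))
      print("Pauli blocking suppresses the rate by a factor of", round(free / blocked, 2))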

  15. High accuracy and visibility-consistent dense multiview stereo.

    PubMed

    Vu, Hoang-Hiep; Labatut, Patrick; Pons, Jean-Philippe; Keriven, Renaud

    2012-05-01

    Since the initial comparison of Seitz et al., the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al., showing the results to compare more than favorably with the current state-of-the-art methods.

  16. Propagation Of Dense Plasma Jets

    NASA Astrophysics Data System (ADS)

    Turchi, Peter J.; Davis, John F.

    1988-05-01

    A variety of schemes have been proposed over the last two decades for delivering lethal amounts of energy and/or momentum to targets such as missiles and high speed aircraft. Techniques have ranged from high energy lasers and high voltage charged-particle accelerators to less exotic but still challenging devices such as electromagnetic railguns. One class of technology involves the use of high speed plasmas. The primary attraction of such technology is the possibility of utilizing relatively compact accelerators and electrical power systems that could allow highly mobile and agile operation from rocket or aircraft platforms, or in special ordnance. Three years ago, R & D Associates examined the possibility of plasma propagation for military applications and concluded that the only viable approach consisted of long dense plasma jets, contained in radial equilibrium by the atmosphere, while propagating at speeds of about 10 km/s. Without atmospheric confinement the plasma density would diminish too rapidly for adequate range and lethality. Propagation of atmospherically-confined jets at speeds much greater than 10 km/s required significant increases in power levels and/or operating altitudes to achieve useful ranges. The present research effort has been developing the experimental conditions necessary to achieve reasonable comparison with theoretical predictions for plasma jet propagation in the atmosphere. Time-resolved measurements have been made of high speed argon plasma jets penetrating a helium background (simulating xenon jets propagating into air). Basic radial confinement of the jet has been observed by photography and spectroscopy and structures in the flow field resemble those predicted by numerical calculations. Results from our successful initial experiments have been used to design improved diagnostic procedures and arcjet source characteristics for further experiments. In experiments with a modified arcjet source, radial confinement of the jet is again

  17. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  18. Performance Assessment of Model-Based Optimal Feedforward and Feedback Current Profile Control in NSTX-U using the TRANSP Code

    NASA Astrophysics Data System (ADS)

    Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.

    2015-11-01

    Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. Firstly, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Secondly, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.
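
    The combined feedforward + feedback structure can be illustrated on a deliberately simple surrogate: a scalar first-order plant standing in for the current-profile model, a feedforward input designed on a nominal model, and a proportional feedback term that compensates for model mismatch. This is a toy sketch, not the physics-based design or the TRANSP simulations.

      # Toy illustration of the feedforward + feedback structure on a scalar
      # first-order surrogate, dy/dt = -a*y + b*u. The feedforward input holds the
      # target exactly in the nominal model; proportional feedback adds robustness
      # when the "true" plant differs. All numbers are illustrative assumptions.
      a_nom, b_nom = 1.0, 2.0          # nominal model used for feedforward design
      a_true, b_true = 1.3, 1.8        # mismatched "real" plant
      dt, T = 0.01, 8.0
      target = 1.5                     # desired steady state
      kp = 4.0                         # proportional feedback gain

      def simulate(use_feedback):
          y = 0.0
          u_ff = a_nom * target / b_nom             # feedforward: holds target in the nominal model
          for _ in range(int(T / dt)):
              u = u_ff + (kp * (target - y) if use_feedback else 0.0)
              y += dt * (-a_true * y + b_true * u)  # forward-Euler step of the true plant
          return y

      print("feedforward only :", round(simulate(False), 3))
      print("ff + feedback    :", round(simulate(True), 3))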

  19. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  20. Mycobacterial RNA isolation optimized for non-coding RNA: high fidelity isolation of 5S rRNA from Mycobacterium bovis BCG reveals novel post-transcriptional processing and a complete spectrum of modified ribonucleosides

    PubMed Central

    Hia, Fabian; Chionh, Yok Hian; Pang, Yan Ling Joy; DeMott, Michael S.; McBee, Megan E.; Dedon, Peter C.

    2015-01-01

    A major challenge in the study of mycobacterial RNA biology is the lack of a comprehensive RNA isolation method that overcomes the unusual cell wall to faithfully yield the full spectrum of non-coding RNA (ncRNA) species. Here, we describe a simple and robust procedure optimized for the isolation of total ncRNA, including 5S, 16S and 23S ribosomal RNA (rRNA) and tRNA, from mycobacteria, using Mycobacterium bovis BCG to illustrate the method. Based on a combination of mechanical disruption and liquid and solid-phase technologies, the method produces all major species of ncRNA in high yield and with high integrity, enabling direct chemical and sequence analysis of the ncRNA species. The reproducibility of the method with BCG was evident in bioanalyzer electrophoretic analysis of isolated RNA, which revealed quantitatively significant differences in the ncRNA profiles of exponentially growing and non-replicating hypoxic bacilli. The method also overcame an historical inconsistency in 5S rRNA isolation, with direct sequencing revealing a novel post-transcriptional processing of 5S rRNA to its functional form and with chemical analysis revealing seven post-transcriptional ribonucleoside modifications in the 5S rRNA. This optimized RNA isolation procedure thus provides a means to more rigorously explore the biology of ncRNA species in mycobacteria. PMID:25539917

  1. Mycobacterial RNA isolation optimized for non-coding RNA: high fidelity isolation of 5S rRNA from Mycobacterium bovis BCG reveals novel post-transcriptional processing and a complete spectrum of modified ribonucleosides.

    PubMed

    Hia, Fabian; Chionh, Yok Hian; Pang, Yan Ling Joy; DeMott, Michael S; McBee, Megan E; Dedon, Peter C

    2015-03-11

    A major challenge in the study of mycobacterial RNA biology is the lack of a comprehensive RNA isolation method that overcomes the unusual cell wall to faithfully yield the full spectrum of non-coding RNA (ncRNA) species. Here, we describe a simple and robust procedure optimized for the isolation of total ncRNA, including 5S, 16S and 23S ribosomal RNA (rRNA) and tRNA, from mycobacteria, using Mycobacterium bovis BCG to illustrate the method. Based on a combination of mechanical disruption and liquid and solid-phase technologies, the method produces all major species of ncRNA in high yield and with high integrity, enabling direct chemical and sequence analysis of the ncRNA species. The reproducibility of the method with BCG was evident in bioanalyzer electrophoretic analysis of isolated RNA, which revealed quantitatively significant differences in the ncRNA profiles of exponentially growing and non-replicating hypoxic bacilli. The method also overcame an historical inconsistency in 5S rRNA isolation, with direct sequencing revealing a novel post-transcriptional processing of 5S rRNA to its functional form and with chemical analysis revealing seven post-transcriptional ribonucleoside modifications in the 5S rRNA. This optimized RNA isolation procedure thus provides a means to more rigorously explore the biology of ncRNA species in mycobacteria. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Dense Plasma Heating and Radiation Generation.

    DTIC Science & Technology

    The investigations under this grant consist of three parts: CO2 laser heating of dense preformed plasmas, interaction of a dense hot plasma with a...small solid pellet, and pulsed power systems and technology. The laser plasma heating experiment has demonstrated both beam guiding by the plasma and...plasma heating by the beam. These results will be useful in heating plasmas for radiation generation. Experiments have shown that the pellet-plasma

  3. Magnetic Phases in Dense Quark Matter

    SciTech Connect

    Incera, Vivian de la

    2007-10-26

    In this paper I discuss the magnetic phases of the three-flavor color superconductor. These phases can take place at different field strengths in a highly dense quark system. Given that the best natural candidates for the realization of color superconductivity are the extremely dense cores of neutron stars, which typically have very large magnetic fields, the magnetic phases here discussed could have implications for the physics of these compact objects.

  4. Dynamical theory of dense groups of galaxies

    NASA Technical Reports Server (NTRS)

    Mamon, Gary A.

    1990-01-01

    It is well known that galaxies associate in groups and clusters. Perhaps 40% of all galaxies are found in groups of 4 to 20 galaxies (e.g., Tully 1987). Although most groups appear to be so loose that the galaxy interactions within them ought to be insignificant, the apparently densest groups, known as compact groups, appear so dense when seen in projection onto the plane of the sky that their members often overlap. These groups thus appear as dense as the cores of rich clusters. The most popular catalog of compact groups, compiled by Hickson (1982), includes isolation among its selection criteria. Therefore, in comparison with the cores of rich clusters, Hickson's compact groups (HCGs) appear to be the densest isolated regions in the Universe (in galaxies per unit volume), and thus provide in principle a clean laboratory for studying the competition of very strong gravitational interactions. The $64,000 question here is then: Are compact groups really bound systems as dense as they appear? If dense groups indeed exist, then one expects that each of the dynamical processes leading to the interaction of their member galaxies should be greatly enhanced. This leads us to the questions: How stable are dense groups? How do they form? And the related question, fascinating to any theorist: What dynamical processes predominate in dense groups of galaxies? If HCGs are not bound dense systems, but instead 1D chance alignments (Mamon 1986, 1987; Walke & Mamon 1989) or 3D transient cores (Rose 1979) within larger looser systems of galaxies, then the relevant question is: How frequent are chance configurations within loose groups? Here, the author answers these last four questions after comparing in some detail the methods used and the results obtained in the different studies of dense groups.

  5. Dissociation energy of molecules in dense gases

    NASA Technical Reports Server (NTRS)

    Kunc, J. A.

    1992-01-01

    A general approach is presented for calculating the reduction of the dissociation energy of diatomic molecules immersed in a dense (n < 10^22 cm^-3) gas of molecules and atoms. The dissociation energy of a molecule in a dense gas differs from that of the molecule in vacuum because the intermolecular forces change the intramolecular dynamics of the molecule, and, consequently, the energy of the molecular bond.

  6. Dissociation energy of molecules in dense gases

    NASA Technical Reports Server (NTRS)

    Kunc, J. A.

    1992-01-01

    A general approach is presented for calculating the reduction of the dissociation energy of diatomic molecules immersed in a dense (n < 10^22 cm^-3) gas of molecules and atoms. The dissociation energy of a molecule in a dense gas differs from that of the molecule in vacuum because the intermolecular forces change the intramolecular dynamics of the molecule, and, consequently, the energy of the molecular bond.

  7. The Dense Gas in M82

    NASA Astrophysics Data System (ADS)

    Salas, P.; Galaz, G.; Salter, D.; Bolatto, A.; Herrera-Camus, R.

    2014-10-01

    Galactic winds are responsible for carrying energy and matter from the inner regions of galaxies to the outer regions, even reaching the intergalactic medium. This process removes gas from the inner regions, the material available to form stars. How, and in what amount, these winds remove gas from galaxies plays an important role in galaxy evolution. To study this effect we have obtained 3 mm maps of dense gas (n_crit > 10^4 cm^-3) in the central region of the starburst galaxy M82. We detect line emission from the dense molecular gas tracers HCN, HCO+, HNC, CS, HC3N and C6H. Our maps reveal a considerable amount of HCO+ emission extending above and below the central star-forming disk, indicating that the dense gas is entrained in the outflow. The mass of molecular hydrogen outside the central starburst is M_out ≈ (3 ± 1) × 10^6 M_sun, while in the central starburst it is M_disk ≈ (8 ± 2) × 10^6 M_sun. These maps also show variations in the amount of dense gas over the starburst disk, revealing that the gas is more concentrated towards the center of the starburst and less towards the edges. It is the average amount of dense gas that drives the observed star formation law between dense gas and star formation rate on galactic scales.

  8. METHOD OF PRODUCING DENSE CONSOLIDATED METALLIC REGULUS

    DOEpatents

    Magel, T.T.

    1959-08-11

    A method is presented for reducing dense metal compositions while simultaneously separating impurities from the reduced dense metal and casting the reduced, purified dense metal, such as uranium, into well-consolidated metal ingots. The reduction is accomplished by heating the dense metallic salt in the presence of a reducing agent, such as an alkali metal or alkaline earth metal, in a bomb-type reaction chamber while applying centrifugal force to the reacting materials. Separation of the metal from the impurities is accomplished essentially by the incorporation of a constricted passageway at the vertex of a conical reaction chamber which is in direct communication with a collecting chamber. When a centrifugal force is applied to the molten metal and slag from the reduction in a direction collinear with the axis of the constricted passage, the dense molten metal is forced therethrough while the less dense slag is retained within the reaction chamber, resulting in a simultaneous separation of the reduced molten metal from the slag and a compacting of the reduced metal in a homogeneous mass.

  9. Computational experience with a dense column feature for interior-point methods

    SciTech Connect

    Wenzel, M.; Czyzyk, J.; Wright, S.

    1997-08-01

    Most software that implements interior-point methods for linear programming formulates the linear algebra at each iteration as a system of normal equations. This approach can be extremely inefficient when the constraint matrix has dense columns, because the density of the normal equations matrix is much greater than that of the constraint matrix and the system is expensive to solve. In this report the authors describe a more efficient approach for this case, which handles the dense columns by using a Schur-complement method and conjugate gradient iteration. The authors report numerical results with the code PCx, into which the technique now has been incorporated.
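
    The idea can be sketched as follows, assuming a small random problem and dense numpy factorizations in place of the sparse Cholesky and conjugate gradient machinery of PCx: split A into sparse and dense columns, factor the sparse part of the normal matrix, and fold the few dense columns back in through a Sherman-Morrison-Woodbury (Schur-complement) correction.

      # Sketch of the dense-column treatment: with A = [A_s A_d], the normal matrix
      # is M = A_s D_s A_s^T + A_d D_d A_d^T = S + U U^T, and the low-rank dense part
      # is folded in via Sherman-Morrison-Woodbury. For brevity S is factored densely
      # here; a real interior-point code would use sparse Cholesky (and possibly CG).
      import numpy as np

      rng = np.random.default_rng(1)
      m, n_sparse, n_dense = 50, 200, 3
      A_s = rng.standard_normal((m, n_sparse)) * (rng.random((m, n_sparse)) < 0.05)
      A_d = rng.standard_normal((m, n_dense))              # the few dense columns
      D = np.diag(rng.random(n_sparse + n_dense) + 0.1)    # interior-point scaling matrix
      b = rng.standard_normal(m)

      S = A_s @ D[:n_sparse, :n_sparse] @ A_s.T + 1e-8 * np.eye(m)   # sparse part (+ tiny regularization)
      U = A_d @ np.sqrt(D[n_sparse:, n_sparse:])                     # low-rank update, M = S + U U^T

      S_inv_b = np.linalg.solve(S, b)
      S_inv_U = np.linalg.solve(S, U)
      small = np.eye(n_dense) + U.T @ S_inv_U                        # (n_dense x n_dense) Schur system
      x = S_inv_b - S_inv_U @ np.linalg.solve(small, U.T @ S_inv_b)

      M = S + U @ U.T
      print(np.allclose(M @ x, b))          # True: matches the full normal-equations solve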

  10. Evolution of Dense Gas with Starburst Age: When Star Formation Versus Dense Gas Relations Break Down

    NASA Astrophysics Data System (ADS)

    Meier, David S.; Turner, J. L.; Schinnerer, E.

    2011-05-01

    Dense gas correlates well with star formation on kpc scales. On smaller scales, motions of individual clouds become comparable to the 100 Myr ages of starbursts. One then expects the star formation rate vs. dense gas relations to break down on giant molecular cloud scales. We exploit this to study the evolutionary history of the nuclear starburst in the nearby spiral IC 342. Maps of the J=5-4 and 16-15 transitions of the dense gas tracer HC3N at 20 pc resolution made with the VLA and the Plateau de Bure interferometer are presented. The 5-4 line of HC3N traces very dense gas in the cold phase, while the 16-15 transition traces warm, dense gas. These reveal changes in dense cloud structure on scales of 30 pc among clouds with star formation histories differing by only a few Myrs. HC3N emission does not correlate well with young star formation at these high spatial resolutions, but gas excitation does. The cold, dense gas extends well beyond the starburst region, implying large amounts of dense quiescent gas not yet actively forming stars. Close to the starburst, the high excitation combined with faint emission indicates that the immediate (30 pc) vicinity of the starburst lacks large masses of very dense gas and has high dense gas star formation efficiencies. The dense gas appears to be in pressure equilibrium with the starburst. We propose a scenario where the starburst is being caught in the act of dispersing or destroying the dense gas in the presence of the expanding HII region. This work is supported by the NSF through NRAO and grant AST-1009620.

  11. MHD modeling of dense plasma focus electrode shape variation

    NASA Astrophysics Data System (ADS)

    McLean, Harry; Hartman, Charles; Schmidt, Andrea; Tang, Vincent; Link, Anthony; Ellsworth, Jen; Reisman, David

    2013-10-01

    The dense plasma focus (DPF) is a very simple device physically, but results to date indicate that very extensive physics is needed to understand the details of operation, especially during the final pinch where kinetic effects become very important. Nevertheless, the overall effects of electrode geometry, electrode size, and drive circuit parameters can be informed efficiently using MHD fluid codes, especially in the run-down phase before the final pinch. These kinds of results can then guide subsequent, more detailed fully kinetic modeling efforts. We report on resistive 2-d MHD modeling results applying the TRAC-II code to the DPF with an emphasis on varying anode and cathode shape. Drive circuit variations are handled in the code using a self-consistent circuit model for the external capacitor bank since the device impedance is strongly coupled to the internal plasma physics. Electrode shape is characterized by the ratio of inner diameter to outer diameter, length to diameter, and various parameterizations for tapering. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  12. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
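
    One piece of the concatenated scheme that is easy to show in isolation is the block interleaver that randomizes burst errors ahead of the RS decoder; the sketch below (in Python rather than C, purely for brevity) writes symbols row-wise into a depth-5 array and reads them out column-wise, so a short burst is spread across different codewords.

      # Sketch of a depth-5 block interleaver: symbols are written into 5 rows and
      # read out by columns, so a burst of channel errors lands in different rows
      # (i.e., different RS codewords) after deinterleaving.
      def interleave(symbols, depth=5):
          """Write row-wise into `depth` rows, read out column-wise."""
          assert len(symbols) % depth == 0
          span = len(symbols) // depth
          rows = [symbols[r * span:(r + 1) * span] for r in range(depth)]
          return [rows[r][c] for c in range(span) for r in range(depth)]

      def deinterleave(symbols, depth=5):
          """Inverse permutation: gather every depth-th symbol back into its row."""
          span = len(symbols) // depth
          return [symbols[c * depth + r] for r in range(depth) for c in range(span)]

      data = list(range(20))                      # e.g. 4 symbols from each of 5 codewords
      sent = interleave(data)
      sent[4:8] = ["X"] * 4                       # a 4-symbol burst on the channel
      received = deinterleave(sent)
      print(received)                             # the burst lands in 4 different rows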

  13. Formation and evolution of black holes in dense star clusters

    NASA Astrophysics Data System (ADS)

    Goswami, Sanghamitra

    Using supercomputer simulations combining stellar dynamics and stellar evolution, we have studied various problems related to the existence of black holes in dense star clusters. We consider both stellar and intermediate-mass black holes, and we focus on massive, dense star clusters, such as old globular clusters and young, so called "super star clusters." The first problem concerns the formation of intermediate-mass black holes in young clusters through the runaway collision instability. A promising mechanism to form intermediate-mass black holes (IMBHs) is runaway mergers in dense star clusters, where main-sequence stars collide repeatedly and form a very massive star (VMS), which then collapses to a black hole (BH). Here we study the effects of primordial mass segregation and the importance of the stellar initial mass function (IMF) on the runaway growth of VMSs using a dynamical Monte Carlo code to model systems with N as high as 10^6 stars. Our Monte Carlo code includes an explicit treatment of all stellar collisions. We place special emphasis on the possibility of top-heavy IMFs, as observed in some very young massive clusters. We find that both primordial mass segregation and the shape of the IMF affect the rate of core collapse of star clusters and thus the time of the runaway. When we include primordial mass segregation we generally see a decrease in core collapse time (tcc). Although for smaller degrees of primordial mass segregation this decrease in tcc is mostly due to the change in the density profile of the cluster, for highly mass-segregated (primordial) clusters, it is the increase in the average mass in the core which reduces the central relaxation time, decreasing tcc. Finally, flatter IMFs generally increase the average mass in the whole cluster, which increases tcc. For the range of IMFs investigated in this thesis, this increase in tcc is to some degree balanced by stellar collisions, which accelerate core collapse. Thus there is no

  14. A secure and efficient entropy coding based on arithmetic coding

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Zhang, Jiashu

    2009-12-01

    A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed as the pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, while data compression with entropy optimality is achieved simultaneously. This modification of the arithmetic coding methodology, which also provides security, can easily be adopted by most international image and video standards as the final entropy coding stage without changing the existing framework. Theoretical analysis and numerical simulations on both static and adaptive models show that the proposed encryption algorithm achieves high security without loss of compression efficiency or additional computational burden with respect to standard AC.

  15. Accessibility of Electron Bernstein Modes in Over-Dense Plasma

    SciTech Connect

    Batchelor, D.B.; Bigelow, T.S.; Carter, M.D.

    1999-04-12

    Mode-conversion between the ordinary, extraordinary and electron Bernstein modes near the plasma edge may allow signals generated by electrons in an over-dense plasma to be detected. Alternatively, high frequency power may gain accessibility to the core plasma through this mode conversion process. Many of the tools used for ion cyclotron antenna design can also be applied near the electron cyclotron frequency. In this paper, we investigate the possibilities for an antenna that may couple to electron Bernstein modes inside an over-dense plasma. The optimum values for wavelengths that undergo mode-conversion are found by scanning the poloidal and toroidal response of the plasma using a warm plasma slab approximation with a sheared magnetic field. Only a very narrow region of the edge can be examined in this manner; however, ray tracing may be used to follow the mode converted power in a more general geometry. It is eventually hoped that the methods can be extended to a hot plasma representation. Using antenna design codes, some basic antenna shapes will be considered to see what types of antennas might be used to detect or launch modes that penetrate the cutoff layer in the edge plasma.

  16. Modeling the Spectra of Dense Hydrogen Plasmas: Beyond Occupation Probability

    NASA Astrophysics Data System (ADS)

    Gomez, T. A.; Montgomery, M. H.; Nagayama, T.; Kilcrease, D. P.; Winget, D. E.

    2017-03-01

    Accurately measuring the masses of white dwarf stars is crucial in many astrophysical contexts (e.g., asteroseismology and cosmochronology). These masses are most commonly determined by fitting a model atmosphere to an observed spectrum; this is known as the spectroscopic method. However, for cases in which more than one method may be employed, there are well known discrepancies between masses determined by the spectroscopic method and those determined by astrometric, dynamical, and/or gravitational-redshift methods. In an effort to resolve these discrepancies, we are developing a new model of hydrogen in a dense plasma that is a significant departure from previous models. Experiments at Sandia National Laboratories are currently underway to validate these new models, and we have begun modifications to incorporate these models into stellar-atmosphere codes.

  17. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  18. Pressure in electronically excited warm dense metals

    NASA Astrophysics Data System (ADS)

    Stegailov, Vladimir; Zhilyaev, Petr

    2015-06-01

    Non-equilibrium two-temperature warm dense metals consist of the ion subsystem, which is subjected to structural transitions and involved in mass transfer, and the electron subsystem, which in various pulsed experiments absorbs energy and then evolves together with the ions to equilibrium. The definition of pressure in such non-equilibrium systems causes certain controversy. In this work we make an attempt to clarify this definition, which is vital for a proper description of the whole relaxation process. Using density functional theory, we analyze the electronic pressure components in warm dense metals using Al and Au as examples. Appealing to the Fermi gas model, we elucidate a way to find the number of free (delocalized) electrons in warm dense metals. First results have been published in. This work is supported by the Russian Science Foundation grant No. 14-19-01487.

  19. Coalescence preference in densely packed microbubbles

    SciTech Connect

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook

    2015-01-13

    A bubble merged from two parent bubbles with different sizes tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in the relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important to understand the reality of coalescence dynamics in a variety of packing situations of soft matter.
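    The two exponents quoted here and in the companion records below can be summarized as a single power law; in the notation below (ours, not the authors'), chi is the relative coalescence position and r the parent size ratio:

      % Scaling of relative coalescence position with parent size ratio (symbols are ours)
      \chi \propto r^{-n}, \qquad
      n \approx 5 \ \text{(free bubbles, surface-energy-release prediction)}, \qquad
      n \approx 2 \ \text{(densely packed microbubbles, observed)}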

  20. Coalescence preference in densely packed microbubbles

    DOE PAGES

    Kim, Yeseul; Lim, Su Jin; Gim, Bopil; ...

    2015-01-13

    A bubble merged from two parent bubbles with different sizes tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in the relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important to understand the reality of coalescence dynamics in a variety of packing situations of soft matter.

  1. Dynamic structure factor in warm dense beryllium

    NASA Astrophysics Data System (ADS)

    Plagemann, K.-U.; Sperling, P.; Thiele, R.; Desjarlais, M. P.; Fortmann, C.; Döppner, T.; Lee, H. J.; Glenzer, S. H.; Redmer, R.

    2012-05-01

    We calculate the dynamic structure factor (DSF) in warm dense beryllium by means of ab initio molecular dynamics simulations. The dynamic conductivity is derived from the Kubo-Greenwood formula, and a Drude-like behaviour is observed. The corresponding dielectric function is used to determine the DSF. Since the ab initio approach is so far only applicable for wavenumbers k = 0, the k-dependence of the dielectric function is modelled via the Mermin ansatz. We present the results for the dielectric function and DSF of warm dense beryllium and compare these with perturbative treatments such as the Born-Mermin approximation. We found considerable differences between the results of these approaches; this underlines the need for a first-principles determination of the DSF of warm dense matter.
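    For reference, a common route from the dielectric function to the DSF is the fluctuation-dissipation relation; the expression below is the standard textbook form (our addition, with n_e the electron density), not necessarily the exact normalization used by the authors:

      % Fluctuation-dissipation link between DSF and dielectric function (standard form, Gaussian units)
      S(k,\omega) = \frac{\hbar k^{2}}{4\pi^{2} e^{2} n_{e}}
                    \frac{1}{1 - e^{-\hbar\omega/k_{B}T}}
                    \,\mathrm{Im}\!\left[\frac{-1}{\epsilon(k,\omega)}\right]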

  2. Eddy Viscosity in Dense Granular Flows

    NASA Astrophysics Data System (ADS)

    Miller, T.; Rognon, P.; Metzger, B.; Einav, I.

    2013-08-01

    We present a seminal set of experiments on dense granular flows in the stadium shear geometry. The advantage of this geometry is that it produces steady shear flow over large deformations, in which the shear stress is constant. The striking result is that the velocity profiles exhibit an S shape, and are not linear as local constitutive laws would predict. We propose a model that suggests this is a result of wall perturbations which span through the system due to the nonlocal behavior of the material. The model is analogous to that of eddy viscosity in turbulent boundary layers, in which the distance to the wall is introduced to predict velocity profiles. Our findings appear pivotal in a number of experimental and practical situations involving dense granular flows next to a boundary. They could further be adapted to other similar materials such as dense suspensions, foams, or emulsions.

  3. MPQC: Performance Analysis and Optimization

    SciTech Connect

    Sarje, Abhinav; Williams, Samuel; Bailey, David

    2013-01-24

    MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.

  4. IR Spectroscopy of PAHs in Dense Clouds

    NASA Astrophysics Data System (ADS)

    Allamandola, Louis; Bernstein, Max; Mattioda, Andrew; Sandford, Scott

    2007-05-01

    Interstellar PAHs are likely to be a component of the ice mantles that form on dust grains in dense molecular clouds. PAHs frozen in grain mantles will produce IR absorption bands, not IR emission features. A couple of very weak absorption features in ground based spectra of a few objects embedded in dense clouds may be due to PAHs. Additionally, spaceborne observations in the 5 to 8 µm region, the region in which PAH spectroscopy is rich, reveal unidentified new bands and significant variation from object to object. It has not been possible to properly evaluate the contribution of PAH bands to these IR observations because laboratory absorption spectra of PAHs condensed in realistic interstellar mixed-molecular ice analogs are lacking. These experimental data are necessary to interpret observations because, in ice mantles, the interaction of PAHs with the surrounding molecules affects PAH IR band positions, widths, profiles, and intrinsic strengths. Furthermore, PAHs are readily ionized in pure H2O ice, further altering the PAH spectrum. This laboratory proposal aims to remedy the situation by studying the IR spectroscopy of PAHs frozen in laboratory ice analogs that realistically reflect the composition of the interstellar ices observed in dense clouds. The purpose is to provide laboratory spectra which can be used to interpret IR observations. We will measure the spectra of these mixed molecular ices containing PAHs before and after ionization and determine the intrinsic band strengths of neutral and ionized PAHs in these ice analogs. This will enable a quantitative assessment of the role that PAHs can play in determining the 5-8 µm spectrum of dense clouds and will directly address the following two fundamental questions associated with dense cloud spectroscopy and chemistry: 1- Can PAHs be detected in dense clouds? 2- Are PAH ions components of interstellar ice?

  5. Superfluid vortices in dense quark matter

    NASA Astrophysics Data System (ADS)

    Mallavarapu, S. Kumar; Alford, Mark; Windisch, Andreas; Vachaspati, Tanmay

    2016-03-01

    Superfluid vortices in the color-flavor-locked (CFL) phase of dense quark matter are known to be energetically disfavored as compared to well-separated triplets of ``semi-superfluid'' color flux tubes. In this talk we will provide results which will identify regions in parameter space where the superfluid vortex spontaneously decays. We will also discuss the nature of the mode that is responsible for the decay of a superfluid vortex in dense quark matter. We will conclude by mentioning the implications of our results to neutron stars.

  6. Fast temperature relaxation model in dense plasmas

    NASA Astrophysics Data System (ADS)

    Faussurier, Gérald; Blancard, Christophe

    2017-01-01

    We present a fast model to calculate the temperature-relaxation rates in dense plasmas. The electron-ion interaction-potential is calculated by combining a Yukawa approach and a finite-temperature Thomas-Fermi model. We include the internal energy as well as the excess energy of ions using the QEOS model. Comparisons with molecular dynamics simulations and calculations based on an average-atom model are presented. This approach allows the study of the temperature relaxation in a two-temperature electron-ion system in warm and hot dense matter.

  7. Demagnetization effects in dense nanoparticle assemblies

    NASA Astrophysics Data System (ADS)

    Normile, P. S.; Andersson, M. S.; Mathieu, R.; Lee, S. S.; Singh, G.; De Toro, J. A.

    2016-10-01

    We highlight the relevance of demagnetizing-field corrections in the characterization of dense magnetic nanoparticle assemblies. By an analysis that employs in-plane and out-of-plane magnetometry on cylindrical assemblies, we demonstrate the suitability of a simple analytical formula-based correction method. This allows us to identify artifacts of the demagnetizing field in temperature-dependent susceptibility curves (e.g., shoulder peaks in curves from a disordered assembly of essentially bare magnetic nanoparticles). The same analysis approach is shown to be a straightforward procedure for determining the magnetic nanoparticle packing fraction in dense, disordered assemblies.
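    The analytical correction referred to above is, in its simplest form, the standard demagnetizing-field relation; the expressions below are the textbook versions (our addition), with N the shape-dependent demagnetizing factor of the cylindrical sample (different for the in-plane and out-of-plane orientations):

      % Standard demagnetizing correction (textbook form; N is the demagnetizing factor)
      H_{\mathrm{int}} = H_{\mathrm{app}} - N M,
      \qquad
      \chi_{\mathrm{int}} = \frac{\chi_{\mathrm{meas}}}{1 - N\,\chi_{\mathrm{meas}}}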

  8. ION BEAM HEATED TARGET SIMULATIONS FOR WARM DENSE MATTER PHYSICS AND INERTIAL FUSION ENERGY

    SciTech Connect

    Barnard, J.J.; Armijo, J.; Bailey, D.S.; Friedman, A.; Bieniosek, F.M.; Henestroza, E.; Kaganovich, I.; Leung, P.T.; Logan, B.G.; Marinak, M.M.; More, R.M.; Ng, S.F.; Penn, G.E.; Perkins, L.J.; Veitzer, S.; Wurtele, J.S.; Yu, S.S.; Zylstra, A.B.

    2008-08-01

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  9. Ion Beam Heated Target Simulations for Warm Dense Matter Physics and Inertial Fusion Energy

    SciTech Connect

    Barnard, J J; Armijo, J; Bailey, D S; Friedman, A; Bieniosek, F M; Henestroza, E; Kaganovich, I; Leung, P T; Logan, B G; Marinak, M M; More, R M; Ng, S F; Penn, G E; Perkins, L J; Veitzer, S; Wurtele, J S; Yu, S S; Zylstra, A B

    2008-08-12

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  10. MACRAD: A mass analysis code for radiators

    SciTech Connect

    Gallup, D.R.

    1988-01-01

    A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length to diameter ratio, vapor to wick radius, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.

  11. Applications of Coding in Network Communications

    ERIC Educational Resources Information Center

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  13. A parallel solver for huge dense linear systems

    NASA Astrophysics Data System (ADS)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies, leveraging secondary memory in order to solve huge linear systems on the order of 100,000 equations. The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors. New version program summary. Program title: Huge Dense System Solver (HDSS) Catalogue identifier: AEHU_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 062 No. of bytes in distributed program, including test data, etc.: 1 069 110 Distribution format: tar.gz Programming language: Fortran90, C Computer: Parallel architectures: multiprocessors, computer clusters Operating system
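    A hedged illustration of the many-right-hand-side scenario mentioned above (200 000 equations, 10 000 right-hand sides): the small in-core SciPy sketch below shows why factorizing once and reusing the factors for every right-hand side pays off; it is not the HDSS/PLAPACK API, and the problem sizes are scaled down.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      n, nrhs = 2000, 50                               # HDSS targets ~200000 x 10000
      rng = np.random.default_rng(0)
      A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
      B = rng.standard_normal((n, nrhs))

      lu, piv = lu_factor(A)        # O(n^3) factorization, done once
      X = lu_solve((lu, piv), B)    # O(n^2) per right-hand side, reusing the factors
      print(np.allclose(A @ X, B))  # True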

  14. DENSE NONAQUEOUS PHASE LIQUIDS -- A WORKSHOP SUMMARY

    EPA Science Inventory

    site characterization, and, therefore, DNAPL remediation, can be expected. Dense nonaqueous phase liquids (DNAPLs) in the subsurface are long-term sources of ground-water contamination, and may persist for centuries before dissolving completely in adjacent ground water. In respo...

  15. Coalescence preference in dense packing of bubbles

    NASA Astrophysics Data System (ADS)

    Kim, Yeseul; Gim, Bopil; Weon, Byung Mook

    2015-11-01

    Coalescence preference is the tendency of a bubble merged from the contact of two original (parent) bubbles to sit closer to the bigger parent. Here, we show that the coalescence preference can be blocked by dense packing of neighboring bubbles. We use high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events, which occur on microsecond timescales, inside a dense packing of microbubbles with a local packing fraction of ~40%. Previous theory and experimental evidence predict a power of -5 between the relative coalescence position and the parent size ratio. However, our new observation for coalescence preference in densely packed microbubbles shows a different power of -2. We believe that this result may be important to understand coalescence dynamics in dense packings of soft matter. This work (NRF-2013R1A22A04008115) was supported by the Mid-career Researcher Program through an NRF grant funded by the MEST and also was supported by the Ministry of Science, ICT and Future Planning (2009-0082580) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A6A3A04039257).

  16. Dense peripheral corneal clouding in Scheie syndrome.

    PubMed

    Summers, C G; Whitley, C B; Holland, E J; Purple, R L; Krivit, W

    1994-05-01

    A 28-year-old woman with Scheie syndrome (MPS I-S) presented with the unusual feature of extremely dense peripheral corneal clouding, allowing maintenance of good central visual acuity. Characteristic systemic features, an abnormal electroretinogram result, and absent alpha-L-iduronidase activity confirmed the diagnosis despite the unusual corneal pattern of clouding.

  17. Preparation of a dense, polycrystalline ceramic structure

    SciTech Connect

    Cooley, Jason; Chen, Ching-Fong; Alexander, David

    2010-12-07

    Ceramic nanopowder was sealed inside a metal container under a vacuum. The sealed evacuated container was forced through a severe deformation channel at an elevated temperature below the melting point of the ceramic nanopowder. The result was a dense nanocrystalline ceramic structure inside the metal container.

  18. DNS of turbulent flows of dense gases

    NASA Astrophysics Data System (ADS)

    Sciacovelli, L.; Cinnella, P.; Gloerfelt, X.; Grasso, F.

    2017-03-01

    The influence of dense gas effects on compressible turbulence is investigated by means of numerical simulations of the decay of compressible homogeneous isotropic turbulence (CHIT) and of supersonic turbulent flows through a plane channel (TCF). For both configurations, a parametric study on the Mach and Reynolds numbers is carried out. The dense gas considered in these parametric studies is PP11, a heavy fluorocarbon. The results are systematically compared to those obtained for a diatomic perfect gas (air). In our computations, the thermodynamic behaviour of the dense gases is modelled by means of the Martin-Hou equation of state. For CHIT cases, initial turbulent Mach numbers up to 1 are analyzed using mesh resolutions up to 512^3. For TCF, bulk Mach numbers up to 3 and bulk Reynolds numbers up to 12000 are investigated. Average profiles of the thermodynamic quantities exhibit significant differences with respect to perfect-gas solutions for both of the configurations. For high-Mach CHIT, compressible structures are modified with respect to air, with weaker eddy shocklets and stronger expansions. In TCF, the velocity profiles of dense gas flows are much less sensitive to the Mach number and collapse reasonably well in the logarithmic region without any special need for compressible scalings, unlike the case of air, and the overall flow behaviour is midway between that of a variable-property liquid and that of a gas.

  19. DENSE NONAQUEOUS PHASE LIQUIDS -- A WORKSHOP SUMMARY

    EPA Science Inventory

    site characterization, and, therefore, DNAPL remediation, can be expected. Dense nonaqueous phase liquids (DNAPLs) in the subsurface are long-term sources of ground-water contamination, and may persist for centuries before dissolving completely in adjacent ground water. In respo...

  20. Burning Of Dense Clusters Of Fuel Drops

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth G.

    1992-01-01

    Report presents theoretical study of evaporation, ignition, and combustion of rich and relatively dense clusters of drops of liquid fuel. Focus on interactions between heterogenous liquid/gas mixture in cluster and flame surrounding it. Theoretical model of evaporation, ignition, and combustion presented.

  1. Flexure modelling at seamounts with dense cores

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Sep; Wessel, Paul

    2010-08-01

    The lithospheric response to seamounts and ocean islands has been successfully described by deformation of an elastic plate induced by a given volcanic load. If the shape and mass of a seamount are known, the lithospheric flexure due to the seamount is determined by the thickness of an elastic plate, Te, which depends on the load density and the age of the plate at the time of seamount construction. We can thus infer important thermomechanical properties of the lithosphere from Te estimates at seamounts and their correlation with other geophysical inferences, such as cooling of the plate. Whereas the bathymetry (i.e. shape) of a seamount is directly observable, the total mass often requires an assumption of the internal seamount structure. The conventional approach considers the seamount to have a uniform density (e.g. density of the crust). This choice, however, tends to bias the total mass acting on an elastic plate. In this study, we will explore a simple approximation to the seamount's internal structure that considers a dense core and a less dense outer edifice. Although the existence of a core is supported by various gravity and seismic studies, the role of such volcanic cores in flexure modelling has not been fully addressed. Here, we present new analytic solutions for plate flexure due to axisymmetric dense core loads, and use them to examine the effects of dense cores in flexure calculations for a variety of synthetic cases. Comparing analytic solutions with and without a core indicates that the flexure model with uniform density underestimates Te by at least 25 per cent. This bias increases when the uniform density is taken to be equal to the crustal density. We also propose a practical application of the dense core model by constructing a uniform density load of same mass as the dense core load. This approximation allows us to compute the flexural deflection and gravity anomaly of a seamount in the wavenumber domain and minimize the limitations
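    For context, the flexural response discussed above is governed by the standard thin-elastic-plate equation; the form below is the textbook version (our addition, not the new dense-core solutions themselves), with Te entering through the flexural rigidity D:

      % Thin elastic plate flexure under a load q with buoyant restoring force (standard form)
      D\,\nabla^{4} w + \left(\rho_{m} - \rho_{\mathrm{infill}}\right) g\, w = q(r),
      \qquad
      D = \frac{E\,T_{e}^{3}}{12\,(1-\nu^{2})}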

  2. Two perspectives on the origin of the standard genetic code.

    PubMed

    Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa

    2014-12-01

    The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.

  3. Superfluidity and vortices in dense quark matter

    NASA Astrophysics Data System (ADS)

    Mallavarapu, Satyanarayana Kumar

    This dissertation will elucidate specific features of superfluid behavior in dense quark matter. It will start with issues regarding the spontaneous decay of superfluid vortices in dense quark matter. This will be followed by topics that explain superfluid phenomena from a field theoretical viewpoint. In particular, the first part of the dissertation will discuss superfluid vortices in the color-flavor-locked (CFL) phase of dense quark matter, which are known to be energetically disfavored as compared to well-separated triplets of "semi-superfluid" color flux tubes. We will provide results which identify regions in parameter space where the superfluid vortex spontaneously decays. We will also discuss the nature of the mode that is responsible for the decay of a superfluid vortex in dense quark matter. We will conclude by mentioning the implications of our results for neutron stars. In the field theoretic formulation of a zero-temperature superfluid one connects the superfluid four-velocity, which is a macroscopic observable, with a microscopic field variable, namely the gradient of the phase of a Bose-condensed scalar field. On the other hand, a superfluid at nonzero temperatures is usually described in terms of a two-fluid model: the superfluid and the normal fluid. In the later part of the dissertation we offer a deeper understanding of the two-fluid model by deriving it from an underlying microscopic field theory. In particular, we shall obtain the macroscopic properties of a uniform, dissipationless superfluid at low temperatures and weak coupling within the framework of a ϕ^4 model. Though our study is very general, it may also be viewed as a step towards understanding the superfluid properties of various phases of dense nuclear and quark matter in the interior of compact stars.

  4. Chemical Dense Gas Modeling in Cities

    NASA Astrophysics Data System (ADS)

    Brown, M. J.; Williams, M. D.; Nelson, M. A.; Streit, G. E.

    2007-12-01

    Many industrial facilities have on-site storage of chemicals and are within a few kilometers of residential population. Chemicals are transported around the country via trains and trucks and often go through populated areas on their journey. Many of the chemicals, like chlorine and phosgene, are toxic and when released into the air are heavier-than-air dense gases that hug the ground and result in high airborne concentrations at breathing level. There is considerable concern about the vulnerability of these stored and transported chemicals to terrorist attack and the impact a release could have on highly-populated urban areas. There is the possibility that the impacts of a dense gas release within a city would be exacerbated since the buildings might act to trap the toxic cloud at street level and channel it over a large area down side streets. However, no one is quite sure what will happen for a release in cities since there is a dearth of experimental data. There are a number of fast-running dense gas models used in the air pollution and emergency response community, but there are none that account for the complex flow fields and turbulence generated by buildings. As part of this presentation, we will discuss current knowledge regarding dense gas releases around buildings and other obstacles. We will present information from wind tunnel and field experiments, as well as computational fluid dynamics modeling. We will also discuss new fast response modeling efforts which are trying to account for dense gas transport and dispersion in cities.

  5. Monte Carlo simulations of ionization potential depression in dense plasmas

    SciTech Connect

    Stransky, M.

    2016-01-15

    A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.

  6. Monte Carlo simulations of ionization potential depression in dense plasmas

    NASA Astrophysics Data System (ADS)

    Stransky, M.

    2016-01-01

    A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.

  7. A Novel Removal Method for Dense Stripes in Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Shen, Huanfeng; Yuan, Qiangqiang; Zhang, Liangpei; Cheng, Qing

    2016-06-01

    In remote sensing images, the stripe noise that is commonly present severely affects imaging quality and limits subsequent applications, especially when the stripes are dense. To process densely striped data well and ensure a reliable solution, we construct a constraint based on statistical properties in our proposed model and use it to control the whole destriping process. The alternating direction method of multipliers (ADMM) is applied in this work to solve the model and accelerate its optimization. Experimental results on real data with different kinds of dense stripe noise demonstrate the effectiveness of the proposed method from both qualitative and quantitative perspectives.
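    As a generic illustration of the ADMM machinery invoked above (not the authors' destriping model, whose statistics-based constraint is not reproduced here), the sketch below solves a standard l1-regularized least-squares problem with the usual x-update, z-update, and dual update:

      import numpy as np

      def soft(v, t):
          # Soft-thresholding: proximal operator of t * ||.||_1
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
          # Generic ADMM for: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z
          n = A.shape[1]
          x = z = u = np.zeros(n)
          AtA = A.T @ A + rho * np.eye(n)
          Atb = A.T @ b
          for _ in range(iters):
              x = np.linalg.solve(AtA, Atb + rho * (z - u))  # x-update (least squares)
              z = soft(x + u, lam / rho)                     # z-update (shrinkage)
              u = u + x - z                                  # dual update
          return z

      A = np.random.default_rng(1).standard_normal((80, 40))
      x_true = np.r_[np.ones(5), np.zeros(35)]
      x_hat = admm_lasso(A, A @ x_true)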

  8. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.
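    The discrete objective described above, image costs projected on control points plus a smoothness penalty over the grid neighborhood system, has the standard pairwise MRF form (notation ours):

      % Pairwise MRF energy over control-point labels l_p (quantized displacements)
      E(\mathbf{l}) = \sum_{p \in G} D_{p}(l_{p})
                    + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(l_{p}, l_{q})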

  9. A novel double patterning approach for 30nm dense holes

    NASA Astrophysics Data System (ADS)

    Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven

    2011-04-01

    Double Patterning Technology (DPT) was commonly accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. A novel process combining double-patterned pillars with image reversal will be proposed for the realization of low-cost dense holes in 30nm node DRAM. The nature of pillar formation lithography provides much better optical contrast compared to the counterpart hole patterning with similar CD requirements. By the utilization of a reliable freezing process, double patterned pillars can be readily implemented. A novel image reverse process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double patterned pillars were tested and compared, and 30nm double patterning pillars were demonstrated successfully. A variety of different image reverse processes will be investigated and discussed for their pros and cons. An economical approach with optimized lithography performance will be proposed for the 30nm DRAM node.

  10. Understanding neutron production in the deuterium dense plasma focus

    SciTech Connect

    Appelbe, Brian E-mail: j.chittenden@imperial.ac.uk; Chittenden, Jeremy E-mail: j.chittenden@imperial.ac.uk

    2014-12-15

    The deuterium Dense Plasma Focus (DPF) can produce copious amounts of MeV neutrons and can be used as an efficient neutron source. However, the mechanism by which neutrons are produced within the DPF is poorly understood and this limits our ability to optimize the device. In this paper we present results from a computational study aimed at understanding how neutron production occurs in DPFs with a current between 70 kA and 500 kA and which parameters can affect it. A combination of MHD and kinetic tools are used to model the different stages of the DPF implosion. It is shown that the anode shape can significantly affect the structure of the imploding plasma and that instabilities in the implosion lead to the generation of large electric fields at stagnation. These electric fields can accelerate deuterium ions within the stagnating plasma to large (>100 keV) energies leading to reactions with ions in the cold dense plasma. It is shown that the electromagnetic fields present can significantly affect the trajectories of the accelerated ions and the resulting neutron production.

  11. Texture-Aware Dense Image Matching Using Ternary Census Transform

    NASA Astrophysics Data System (ADS)

    Hu, Han; Chen, Chongtai; Wu, Bo; Yang, Xiaoxia; Zhu, Qing; Ding, Yulin

    2016-06-01

    Textureless areas and geometric discontinuities are major problems in state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost methods, but in textureless areas, where the intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes, and must compromise between smoothness and discontinuities. The aim of this study is to provide a method to overcome these issues in dense image matching, by extending the industry-proven Semi-Global Matching through 1) developing a ternary census transform, which takes three outputs in a single order comparison and encodes the results in two bits rather than one, and 2) using texture information to self-tune the parameters, which both preserves sharp edges and enforces smoothness when necessary. Experimental results using various datasets from different platforms have shown that the visual qualities of the triangulated point clouds in urban areas can be largely improved by these proposed methods.
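    A minimal sketch of the ternary census idea described above, assuming a square window and a comparison threshold eps (both illustrative; the paper's exact window size and encoding may differ): each neighbour is compared with the centre pixel, mapped to one of three states, and the states are packed two bits at a time into a descriptor.

      import numpy as np

      def ternary_census(img, radius=2, eps=4):
          # State per neighbour p of centre c: 0 if p < c - eps, 1 if |p - c| <= eps, 2 if p > c + eps.
          # Window radius and eps are illustrative assumptions, not the paper's values.
          h, w = img.shape
          c = img.astype(np.int32)
          pad = np.pad(c, radius, mode="edge")
          out = np.zeros((h, w), dtype=np.uint64)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  if dy == 0 and dx == 0:
                      continue
                  nb = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                  state = np.where(nb > c + eps, 2, np.where(nb < c - eps, 0, 1))
                  out = (out << np.uint64(2)) | state.astype(np.uint64)   # pack 2 bits per neighbour
          return out

      # Matching cost between two pixels is then e.g. a Hamming-style distance on the descriptors.
      desc = ternary_census(np.random.default_rng(0).integers(0, 256, (32, 32)))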

  12. Efficiently dense hierarchical graphene based aerogel electrode for supercapacitors

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Lu, Chengxing; Peng, Huifen; Zhang, Xin; Wang, Zhenkun; Wang, Gongkai

    2016-08-01

    Boosting gravimetric and volumetric capacitances simultaneously at a high rate remains a challenge in the development of graphene-based supercapacitors. We report the preparation of dense hierarchical graphene/activated carbon composite aerogels via a reduction induced self-assembly process coupled with a drying post treatment. The compact and porous structures of the composite aerogels could be maintained. The drying post treatment has significant effects on increasing the packing density of the aerogels. The introduced activated carbons play the key roles of spacers and bridges, mitigating the restacking of adjacent graphene nanosheets and connecting lateral and vertical graphene nanosheets, respectively. The optimized aerogel with a packing density of 0.67 g cm^-3 could deliver maximum gravimetric and volumetric capacitances of 128.2 F g^-1 and 85.9 F cm^-3, respectively, at a current density of 1 A g^-1 in aqueous electrolyte, showing no apparent degradation of the specific capacitance at a current density of 10 A g^-1 after 20000 cycles. The corresponding gravimetric and volumetric capacitances of 116.6 F g^-1 and 78.1 F cm^-3 with an acceptable cyclic stability are also achieved in ionic liquid electrolyte. The results show a feasible strategy of designing dense hierarchical graphene based aerogels for supercapacitors.
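    The gravimetric and volumetric figures quoted above are linked through the packing density; as a quick consistency check (our arithmetic):

      % Volumetric capacitance = packing density x gravimetric capacitance
      C_{V} = \rho_{\mathrm{pack}}\, C_{g}, \qquad
      0.67~\mathrm{g\,cm^{-3}} \times 128.2~\mathrm{F\,g^{-1}} \approx 85.9~\mathrm{F\,cm^{-3}}, \qquad
      0.67 \times 116.6 \approx 78.1~\mathrm{F\,cm^{-3}}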

  13. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  14. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  15. Diagnostic Coding for Epilepsy.

    PubMed

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  16. Phylogeny of genetic codes and punctuation codes within genetic codes.

    PubMed

    Seligmann, Hervé

    2015-03-01

    Punctuation codons (starts, stops) delimit genes and reflect translation apparatus properties. Most codon reassignments involve punctuation. Here two complementary approaches classify natural genetic codes: (A) properties of amino acids assigned to codons (classical phylogeny), coding stops as X (A1, antitermination/suppressor tRNAs insert unknown residues), or as gaps (A2, no translation, classical stop); and (B) considering only punctuation status (start, stop and other codons coded as -1, 0 and 1 (B1); 0, -1 and 1 (B2, reflects ribosomal translational dynamics); and 1, -1, and 0 (B3, starts/stops as opposites)). All methods separate most mitochondrial codes from most nuclear codes; Gracilibacteria consistently cluster with metazoan mitochondria; mitochondria co-hosted with chloroplasts cluster with nuclear codes. Method A1 clusters the euplotid nuclear code with metazoan mitochondria; A2 separates euplotids from mitochondria. Firmicute bacteria Mycoplasma/Spiroplasma and Protozoan (and lower metazoan) mitochondria share codon-amino acid assignments. A1 clusters them with mitochondria; they cluster with the standard genetic code under A2: constraints on amino acid ambiguity versus punctuation-signaling produced the mitochondrial versus bacterial versions of this genetic code. Punctuation analysis B2 converges best with classical phylogenetic analyses, stressing the need for a unified theory of genetic code punctuation accounting for ribosomal constraints.
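    A minimal sketch of the punctuation-only encodings B1-B3 described above, assuming the standard code's usual start/stop codons purely for illustration (the analysis itself covers many alternative codes):

      # Punctuation-status encodings (start, stop, other) per the schemes above.
      STARTS, STOPS = {"ATG"}, {"TAA", "TAG", "TGA"}     # standard-code example only
      SCHEMES = {"B1": (-1, 0, 1), "B2": (0, -1, 1), "B3": (1, -1, 0)}

      def encode(codon, scheme="B1"):
          start, stop, other = SCHEMES[scheme]
          return start if codon in STARTS else stop if codon in STOPS else other

      vector = [encode(c, "B2") for c in ("ATG", "TGA", "GGC")]   # -> [0, -1, 1]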

  17. To Code or Not To Code?

    ERIC Educational Resources Information Center

    Parkinson, Brian; Sandhu, Parveen; Lacorte, Manel; Gourlay, Lesley

    1998-01-01

    This article considers arguments for and against the use of coding systems in classroom-based language research and touches on some relevant considerations from ethnographic and conversational analysis approaches. The four authors each explain and elaborate on their practical decision to code or not to code events or utterances at a specific point…

  18. Sparse coding based feature representation method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality requiring a lot of computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification. To further
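    A minimal sketch of the encoding step described above, assuming a column-normalized dictionary D and a threshold t (both illustrative; the sub-band construction is not reproduced): the sparse code is obtained by soft-thresholding the dictionary correlations rather than by solving a full optimization problem.

      import numpy as np

      def soft_threshold_encode(D, x, t=0.5):
          # Sparse feature for a pixel spectrum x given dictionary D (atoms as columns):
          # correlate, then soft-threshold. Illustrative stand-in for the SC-DFR encoding step.
          z = D.T @ x
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      rng = np.random.default_rng(0)
      D = rng.standard_normal((200, 64))              # e.g. 200 spectral bands, 64 atoms
      D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
      x = D[:, 3] + 0.05 * rng.standard_normal(200)   # noisy pixel close to atom 3
      code = soft_threshold_encode(D, x)
      print(np.count_nonzero(code), "active atoms")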

  19. IR Spectroscopy of PANHs in Dense Clouds

    NASA Astrophysics Data System (ADS)

    Allamandola, Louis; Mattioda, Andrew; Sandford, Scott

    2008-03-01

    Interstellar PAHs are likely to be frozen into ice mantles on dust grains in dense clouds. These PAHs will produce IR absorption bands, not emission features. A couple of very weak absorption features in ground based spectra of a few objects in dense clouds may be due to PAHs. It is now thought that aromatic molecules in which N atoms are substituted for a few of the C atoms in a PAH's hexagonal skeletal network (PANHs) may well be as abundant and ubiquitous throughout the interstellar medium as PAHs. Spaceborne observations in the 5 to 8 µm region, the region in which PAH spectroscopy is rich, reveal unidentified new bands and significant variation from object to object. It is not possible to analyze these observations because lab spectra of PANHs and PAHs condensed in realistic interstellar ice analogs are lacking. This lab data is necessary to interpret observations because, in ice mantles, the surrounding molecules affect PANH and PAH IR band positions, widths, profiles, and intrinsic strengths. Further, PAHs (and PANHs?) are readily ionized in pure H2O ice, further altering the spectrum. This proposal starts to address this situation by studying the IR spectra of PANHs frozen in laboratory ice analogs that reflect the composition of the interstellar ices observed in dense clouds. Thanks to Spitzer Cycle-4 support, we are now measuring the spectra of PAHs in interstellar ice analogs to provide laboratory spectra that can be used to interpret IR observations. Here we propose to extend this work to PANHs. We will measure the spectra of these interstellar ice analogs containing PANHs before and after ionization and determine the band strengths of neutral and ionized PANHs in these ices. This will enable a quantitative assessment of the role that PANHs can play in the 5-8 µm spectrum of dense clouds and address the following two fundamental questions associated with dense cloud spectroscopy and chemistry: 1- Can PANHs be detected in dense clouds? 2- Are PANH ions

  20. Fully kinetic simulations of megajoule-scale dense plasma focus

    SciTech Connect

    Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M.; Welch, D.; Meehan, B. T.; Hagen, E. C.

    2014-10-15

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a MegaJoule DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  1. Upgrades to NRLMOL code

    NASA Astrophysics Data System (ADS)

    Basurto, Luis

    This project consists of performing upgrades to the massively parallel NRLMOL electronic structure code in order to enhance its performance and flexibility by: a) Utilizing dynamically allocated arrays, b) Executing in a parallel environment sections of the program that were previously executed in serial mode, c) Exploring simultaneous concurrent executions of the program through the use of an already existing MPI environment; thus enabling the simulation of larger systems than it is currently capable of performing. Also developed was a graphical user interface that will allow less experienced users to start performing electronic structure calculations by aiding them in performing the necessary configuration of input files as well as providing graphical tools for the display and analysis of results. Additionally, a computational toolkit that can avail of large supercomputers and make use of various levels of approximation for atomic interactions was developed to search for stable atomic clusters and predict novel stable endohedral fullerenes. As an application of the developed computational toolkit, a search was conducted for stable isomers of the Sc3N@C80 fullerene. In this search, about 1.2 million isomers of C80 were optimized in various charged states at the PM6 level. Subsequently, using the selected optimized isomers of C80 in various charged states, about 10,000 isomers of Sc3N@C80 were constructed and optimized using the semi-empirical PM6 quantum chemical method. A few of the lowest-lying isomers of Sc3N@C80 were optimized at the DFT level. The calculation confirms the lowest three isomers previously reported in the literature, but four new isomers are found within the lowest 10 isomers. Using the upgraded NRLMOL code, a study was done of the electronic structure of a multichromophoric molecular complex containing two of each borondipyrromethane dye, Zn-tetraphenyl-porphyrin, bisphenyl anthracene and a fullerene. A systematic examination of the effect of

  2. WARM EXTENDED DENSE GAS AT THE HEART OF A COLD COLLAPSING DENSE CORE

    SciTech Connect

    Shinnaga, Hiroko; Phillips, Thomas G.; Furuya, Ray S.; Kitamura, Yoshimi E-mail: tgp@submm.caltech.ed E-mail: kitamura@isas.jaxa.j

    2009-12-01

    In order to investigate when and how the birth of a protostellar core occurs, we made survey observations of four well-studied dense cores in the Taurus molecular cloud using CO transitions in submillimeter bands. We report here the detection of unexpectedly warm (~30-70 K), extended (radius of ~2400 AU), dense (a few times 10^5 cm^-3) gas at the heart of one of the dense cores, L1521F (MC27), within the cold dynamically collapsing components. We argue that the detected warm, extended, dense gas may originate from shock regions caused by collisions between the dynamically collapsing components and outflowing/rotating components within the dense core. We propose a new stage of star formation, 'warm-in-cold core stage (WICCS)', i.e., the cold collapsing envelope encases the warm extended dense gas at the center due to the formation of a protostellar core. WICCS would constitute a missing link in evolution between a cold quiescent starless core and a young protostar in class 0 stage that has a large-scale bipolar outflow.

  3. On Coding Non-Contiguous Letter Combinations

    PubMed Central

    Dandurand, Frédéric; Grainger, Jonathan; Duñabeitia, Jon Andoni; Granier, Jean-Pierre

    2011-01-01

    Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity. PMID:21734901

  4. Revisiting the Physico-Chemical Hypothesis of Code Origin: An Analysis Based on Code-Sequence Coevolution in a Finite Population

    NASA Astrophysics Data System (ADS)

    Bandhu, Ashutosh Vishwa; Aggarwal, Neha; Sengupta, Supratim

    2013-12-01

    The origin of the genetic code marked a major transition from a plausible RNA world to the world of DNA and proteins and is an important milestone in our understanding of the origin of life. We examine the efficacy of the physico-chemical hypothesis of code origin by carrying out simulations of code-sequence coevolution in finite populations in stages, leading first to the emergence of ten amino acid code(s) and subsequently to 14 amino acid code(s). We explore two different scenarios of primordial code evolution. In one scenario, competition occurs between populations of equilibrated code-sequence sets, while in the other scenario new codes compete with existing codes as they are gradually introduced into the population with a finite probability. In either case, we find that natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. The code whose structure is most consistent with the standard genetic code is often not among the codes that have a high fixation probability. However, we find that the composition of the code population affects the code fixation probability. A physico-chemically optimized code gets fixed with a significantly higher probability if it competes against a set of randomly generated codes. Our results suggest that physico-chemical optimization may not be the sole driving force in ensuring the emergence of the standard genetic code.
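    The notion of a code fixation probability can be illustrated with a short sketch; the Moran-process dynamics, population size, and selective advantage s below are generic assumptions chosen for illustration, not the authors' code-sequence coevolution model.

      # Illustrative sketch: fixation probability of a new code introduced into a finite
      # population, using a simple Moran process (assumed dynamics, not the authors' model).
      import random

      def fixation_probability(pop_size=50, s=0.02, trials=1000, seed=1):
          """Fraction of runs in which a single carrier of the new code takes over."""
          rng = random.Random(seed)
          fixed = 0
          for _ in range(trials):
              n_new = 1  # one individual carrying the new code enters the population
              while 0 < n_new < pop_size:
                  # Birth: parent chosen proportionally to fitness (new code has advantage s).
                  p_birth_new = (1 + s) * n_new / ((1 + s) * n_new + (pop_size - n_new))
                  # Death: individual chosen uniformly at random.
                  p_death_new = n_new / pop_size
                  n_new += (rng.random() < p_birth_new) - (rng.random() < p_death_new)
              fixed += (n_new == pop_size)
          return fixed / trials

      print(fixation_probability())  # compare against the neutral expectation 1 / pop_size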

  5. Bare Code Reader

    NASA Astrophysics Data System (ADS)

    Clair, Jean J.

    1980-05-01

    The bar code system will be used in every market and supermarket. The code, which is standardised in the US and Europe (the EAN code), gives information on price, storage, and nature, and allows real-time management of the shop.
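    As a concrete example of the information an EAN code carries, the following sketch computes the standard EAN-13 check digit (the example number is a commonly used test value; the function name is ours).

      # Standard EAN-13 checksum: the first 12 digits are weighted alternately 1 and 3
      # from the left, and the check digit rounds the weighted sum up to a multiple of 10.
      def ean13_check_digit(digits12: str) -> int:
          assert len(digits12) == 12 and digits12.isdigit()
          total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
          return (10 - total % 10) % 10

      print(ean13_check_digit("400638133393"))  # -> 1, completing the code 4006381333931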

  6. Topological Surface States in Dense Solid Hydrogen.

    PubMed

    Naumov, Ivan I; Hemley, Russell J

    2016-11-11

    Metallization of dense hydrogen and associated possible high-temperature superconductivity represents one of the key problems of physics. Recent theoretical studies indicate that before becoming a good metal, compressed solid hydrogen passes through a semimetallic stage. We show that such semimetallic phases predicted to be the most stable at multimegabar (∼300  GPa) pressures are not conventional semimetals: they exhibit topological metallic surface states inside the bulk "direct" gap in the two-dimensional surface Brillouin zone; that is, metallic surfaces may appear even when the bulk of the material remains insulating. Examples include hydrogen in the Cmca-12 and Cmca-4 structures; Pbcn hydrogen also has metallic surface states but they are of a nontopological nature. The results provide predictions for future measurements, including probes of possible surface superconductivity in dense hydrogen.

  7. Topological Surface States in Dense Solid Hydrogen

    NASA Astrophysics Data System (ADS)

    Naumov, Ivan I.; Hemley, Russell J.

    2016-11-01

    Metallization of dense hydrogen and associated possible high-temperature superconductivity represents one of the key problems of physics. Recent theoretical studies indicate that before becoming a good metal, compressed solid hydrogen passes through a semimetallic stage. We show that such semimetallic phases predicted to be the most stable at multimegabar (∼300 GPa) pressures are not conventional semimetals: they exhibit topological metallic surface states inside the bulk "direct" gap in the two-dimensional surface Brillouin zone; that is, metallic surfaces may appear even when the bulk of the material remains insulating. Examples include hydrogen in the Cmca-12 and Cmca-4 structures; Pbcn hydrogen also has metallic surface states but they are of a nontopological nature. The results provide predictions for future measurements, including probes of possible surface superconductivity in dense hydrogen.

  8. Dense Deposit Disease and C3 Glomerulopathy

    PubMed Central

    Barbour, Thomas D.; Pickering, Matthew C.; Terence Cook, H.

    2013-01-01

    C3 glomerulopathy refers to those renal lesions characterized histologically by predominant C3 accumulation within the glomerulus, and pathogenetically by aberrant regulation of the alternative pathway of complement. Dense deposit disease is distinguished from other forms of C3 glomerulopathy by its characteristic appearance on electron microscopy. The extent to which dense deposit disease also differs from other forms of C3 glomerulopathy in terms of clinical features, natural history, and outcomes of treatment including renal transplantation is less clear. We discuss the pathophysiology of C3 glomerulopathy, with evidence for alternative pathway dysregulation obtained from affected individuals and complement factor H (Cfh)-deficient animal models. Recent linkage studies in familial C3 glomerulopathy have shown genomic rearrangements in the Cfh-related genes, for which the novel pathophysiologic concept of Cfh deregulation has been proposed. PMID:24161036

  9. Active fluidization in dense glassy systems.

    PubMed

    Mandal, Rituparno; Bhuyan, Pranab Jyoti; Rao, Madan; Dasgupta, Chandan

    2016-07-20

    Dense soft glasses show strong collective caging behavior at sufficiently low temperatures. Using molecular dynamics simulations of a model glass former, we show that the incorporation of activity or self-propulsion, f0, can induce cage breaking and fluidization, resulting in the disappearance of the glassy phase beyond a critical f0. The diffusion coefficient crosses over from being strongly to weakly temperature dependent as f0 is increased. In addition, we demonstrate that activity induces a crossover from a fragile to a strong glass and a tendency of active particles to cluster. Our results are of direct relevance to the collective dynamics of dense active colloidal glasses and to recent experiments on tagged particle diffusion in living cells.
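    A minimal sketch of how an activity of magnitude f0 can be added to an overdamped particle update is given below; the 2D active-Brownian form, the persistence time tau_p, and all parameter values are generic assumptions for illustration, not the authors' model glass former.

      # Illustrative 2D overdamped update with a self-propulsion force of magnitude f0
      # whose direction decorrelates on a persistence time tau_p (mobility and k_B set to 1).
      import numpy as np

      def active_langevin_step(pos, angle, forces, f0, dt, temperature, tau_p, rng):
          """Advance positions and propulsion angles by one time step dt."""
          propulsion = f0 * np.stack([np.cos(angle), np.sin(angle)], axis=1)
          noise = np.sqrt(2.0 * temperature * dt) * rng.standard_normal(pos.shape)
          pos = pos + (forces + propulsion) * dt + noise
          # Rotational diffusion of the propulsion direction sets the persistence time.
          angle = angle + np.sqrt(2.0 * dt / tau_p) * rng.standard_normal(angle.shape)
          return pos, angle

      rng = np.random.default_rng(0)
      pos, angle = rng.random((64, 2)), rng.random(64) * 2.0 * np.pi
      pos, angle = active_langevin_step(pos, angle, np.zeros_like(pos),
                                        f0=1.0, dt=1e-3, temperature=0.4, tau_p=10.0, rng=rng)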

  10. Hydrodynamic stellar interactions in dense star clusters

    NASA Technical Reports Server (NTRS)

    Rasio, Frederic A.

    1993-01-01

    Highly detailed HST observations of globular-cluster cores and galactic nuclei motivate new theoretical studies of the violent dynamical processes which govern the evolution of these very dense stellar systems. These processes include close stellar encounters and direct physical collisions between stars. Such hydrodynamic stellar interactions are thought to explain the large populations of blue stragglers, millisecond pulsars, X-ray binaries, and other peculiar sources observed in globular clusters. Three-dimensional hydrodynamics techniques now make it possible to perform realistic numerical simulations of these interactions. The results, when combined with those of N-body simulations of stellar dynamics, should provide for the first time a realistic description of dense star clusters. Here I review briefly current theoretical work on hydrodynamic stellar interactions, emphasizing its relevance to recent observations.

  11. Static dielectric properties of dense ionic fluids.

    PubMed

    Zarubin, Grigory; Bier, Markus

    2015-05-14

    The static dielectric properties of dense ionic fluids, e.g., room temperature ionic liquids (RTILs) and inorganic fused salts, are investigated on different length scales by means of grand canonical Monte Carlo simulations. A generally applicable scheme is developed which allows one to approximately decompose the electric susceptibility of dense ionic fluids into the orientation and the distortion polarization contributions. It is shown that at long range, the well-known plasma-like perfect screening behavior occurs, which corresponds to a diverging distortion susceptibility, whereas at short range, orientation polarization dominates, which coincides with that of a dipolar fluid of attached cation-anion pairs. This observation suggests that the recently debated interpretation of RTILs as dilute electrolyte solutions might not be simply a yes-no question but might depend on the length scale considered.

  12. Impacts by Compact Ultra Dense Objects

    NASA Astrophysics Data System (ADS)

    Birrell, Jeremey; Labun, Lance; Rafelski, Johann

    2012-03-01

    We propose to search for compact ultra dense objects (CUDOs) of nuclear or greater density, which could constitute a significant fraction of the dark matter [1]. Considering their high density, the gravitational tidal forces are significant and atomic-density matter cannot stop an impacting CUDO, which punctures the surface of the target body, pulverizing, heating and entraining material near its trajectory through the target [2]. Because impact features endure over geologic timescales, the Earth, Moon, Mars, Mercury and large asteroids are well-suited to act as time-integrating CUDO detectors. There are several potential candidates for CUDO structure, such as strangelet fragments or, more generally, dark matter if mechanisms exist for it to form compact objects. [1] B. J. Carr, K. Kohri, Y. Sendouda, and J. Yokoyama, Phys. Rev. D 81, 104019 (2010). [2] L. Labun, J. Birrell, J. Rafelski, Solar System Signatures of Impacts by Compact Ultra Dense Objects, arXiv:1104.4572.

  13. Quantum kinetic equation for nonequilibrium dense systems

    NASA Astrophysics Data System (ADS)

    Morozov, V. G.; Röpke, G.

    1995-02-01

    Using the density matrix method in the form developed by Zubarev, equations of motion for nonequilibrium quantum systems with continuous short range interactions are derived which describe kinetic and hydrodynamic processes in a consistent way. The T-matrix as well as the two-particle density matrix determining the nonequilibrium collision integral are obtained in the ladder approximation including the Hartree-Fock corrections and the Pauli blocking for intermediate states. It is shown that in this approximation the total energy is conserved. The developed approach to the kinetic theory of dense quantum systems is able to reproduce the virial corrections consistent with the generalized Beth-Uhlenbeck approximation in equilibrium. The contribution of many-particle correlations to the drift term in the quantum kinetic equation for dense systems is discussed.

  14. PHOTOCHEMICAL HEATING OF DENSE MOLECULAR GAS

    SciTech Connect

    Glassgold, A. E.; Najita, J. R.

    2015-09-10

    Photochemical heating is analyzed with an emphasis on the heating generated by chemical reactions initiated by the products of photodissociation and photoionization. The immediate products are slowed down by collisions with the ambient gas and then heat the gas. In addition to this direct process, heating is also produced by the subsequent chemical reactions initiated by these products. Some of this chemical heating comes from the kinetic energy of the reaction products and the rest from collisional de-excitation of the product atoms and molecules. In considering dense gas dominated by molecular hydrogen, we find that the chemical heating is sometimes as large as, if not much larger than, the direct heating. In very dense gas, the total photochemical heating approaches 10 eV per photodissociation (or photoionization), competitive with other ways of heating molecular gas.
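    As a rough order-of-magnitude illustration (the symbols below are introduced here for illustration and do not appear in the abstract): if each photodissociation deposits an energy ε ≈ 10 eV ≈ 1.6 × 10^-11 erg and photodissociations occur at a per-molecule rate ζ in gas of molecular hydrogen density n(H2), the volumetric heating rate is approximately

      \Gamma \;\approx\; \varepsilon\,\zeta\,n(\mathrm{H_2})
             \;\approx\; 1.6\times10^{-11}\,
             \Big(\frac{\zeta}{\mathrm{s^{-1}}}\Big)
             \Big(\frac{n(\mathrm{H_2})}{\mathrm{cm^{-3}}}\Big)\ \mathrm{erg\,cm^{-3}\,s^{-1}}.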

  15. Generalized concatenated quantum codes

    SciTech Connect

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-05-15

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
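    For reference, the (nondegenerate) quantum Hamming bound mentioned above states that an ((n, K)) qubit code correcting t errors must satisfy

      K \sum_{j=0}^{t} 3^{j} \binom{n}{j} \;\le\; 2^{n},

    which for single-error-correcting codes (t = 1) reduces to K(1 + 3n) ≤ 2^n.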

  16. Shear dispersion in dense granular flows

    SciTech Connect

    Christov, Ivan C.; Stone, Howard A.

    2014-04-18

    We formulate and solve a model problem of dispersion of dense granular materials in rapid shear flow down an incline. The effective dispersivity of the depth-averaged concentration of the dispersing powder is shown to vary as the Péclet number squared, as in classical Taylor–Aris dispersion of molecular solutes. An extension to generic shear profiles is presented, and possible applications to industrial and geological granular flows are noted.
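    For comparison, the classical Taylor–Aris result alluded to above: for a molecular solute in Poiseuille flow through a tube of radius a with mean speed U and molecular diffusivity D (the tube geometry and the factor of 48 belong to the classical molecular case, not to the granular calculation),

      D_{\mathrm{eff}} = D\left(1 + \frac{\mathrm{Pe}^{2}}{48}\right), \qquad \mathrm{Pe} = \frac{aU}{D},

    showing the same Pe^2 scaling reported here for the depth-averaged concentration in dense granular shear flow.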

  17. Structures for dense, crack free thin films

    DOEpatents

    Jacobson, Craig P [Lafayette, CA; Visco, Steven J [Berkeley, CA; De Jonghe, Lutgard C [Lafayette, CA

    2011-03-08

    The process described herein provides a simple and cost-effective method for making crack-free, high-density thin ceramic films. The steps involve depositing a layer of a ceramic material on a porous or dense substrate. The deposited layer is compacted, and the resultant laminate is then sintered to achieve a higher density than would have been possible without the pre-firing compaction step.

  18. Oxygen ion-conducting dense ceramic

    DOEpatents

    Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou

    1998-01-01

    Preparation, structure, and properties of mixed metal oxide compositions and their uses are described. Mixed metal oxide compositions of the invention have stratified crystalline structure identifiable by means of powder X-ray diffraction patterns. In the form of dense ceramic membranes, the present compositions demonstrate an ability to separate oxygen selectively from a gaseous mixture containing oxygen and one or more other volatile components by means of ionic conductivities.

  19. Shear dispersion in dense granular flows

    DOE PAGES

    Christov, Ivan C.; Stone, Howard A.

    2014-04-18

    We formulate and solve a model problem of dispersion of dense granular materials in rapid shear flow down an incline. The effective dispersivity of the depth-averaged concentration of the dispersing powder is shown to vary as the Péclet number squared, as in classical Taylor–Aris dispersion of molecular solutes. An extension to generic shear profiles is presented, and possible applications to industrial and geological granular flows are noted.

  20. Confined magnetic monopoles in dense QCD

    NASA Astrophysics Data System (ADS)

    Gorsky, A.; Shifman, M.; Yung, A.

    2011-04-01

    Non-Abelian strings exist in the color-flavor locked phase of dense QCD. We show that kinks appearing in the world-sheet theory on these strings, in the form of the kink-antikink bound pairs, are the magnetic monopoles—descendants of the ’t Hooft-Polyakov monopoles surviving in such a special form in dense QCD. Our consideration is heavily based on analogies and inspiration coming from certain supersymmetric non-Abelian theories. This is the first ever analytic demonstration that objects unambiguously identifiable as the magnetic monopoles are native to non-Abelian Yang-Mills theories (albeit our analysis extends only to the phase of the monopole confinement and has nothing to say about their condensation). Technically, our demonstration becomes possible due to the fact that low-energy dynamics of the non-Abelian strings in dense QCD is that of the orientational zero modes. It is described by an effective two-dimensional CP(2) model on the string world sheet. The kinks in this model representing confined magnetic monopoles are in a highly quantum regime.