Optimal probabilistic dense coding schemes
NASA Astrophysics Data System (ADS)
Kögler, Roger A.; Neves, Leonardo
2017-04-01
Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) the message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and to interpolate continuously between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only to qudits (d-level quantum systems with d > 2) and consists of further attempts at state identification in case of failure in the first one. We show that this scheme is advantageous over (ii), as it increases the mutual information between sender and receiver.
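Approach (ii) above is unambiguous state discrimination. As a minimal sketch (the function name and the two-state example are mine, not the paper's), the optimal success probability for unambiguously distinguishing two equiprobable pure states is the Ivanovic-Dieks-Peres bound, 1 - |<psi0|psi1>|:

```python
import numpy as np

def idp_success_probability(psi0, psi1):
    """Optimal probability of unambiguously telling two equiprobable
    pure states apart: the Ivanovic-Dieks-Peres bound 1 - |<psi0|psi1>|."""
    psi0 = psi0 / np.linalg.norm(psi0)
    psi1 = psi1 / np.linalg.norm(psi1)
    overlap = abs(np.vdot(psi0, psi1))
    return 1.0 - overlap

# two non-orthogonal qubit states separated by an angle of pi/6
theta = np.pi / 6
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
print(idp_success_probability(psi0, psi1))  # 1 - cos(pi/6) ~ 0.134
```

Orthogonal states give success probability 1, identical states give 0; the bound interpolates in between, which is the trade-off the abstract refers to.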
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2002-01-14
During the past quarter, float-sink analyses were completed for four of the seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid-February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operator's manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.
Dense Coding in a Two-Spin Squeezing Model with Intrinsic Decoherence
NASA Astrophysics Data System (ADS)
Zhang, Bing-Bing; Yang, Guo-Hui
2016-11-01
Quantum dense coding in a two-spin squeezing model under intrinsic decoherence is investigated for different initial states (Werner state and Bell state). It is shown that the dense coding capacity χ oscillates with time and finally reaches different stable values. χ can be enhanced by decreasing the magnetic field Ω and the intrinsic decoherence γ or by increasing the squeezing interaction μ; moreover, one can obtain a valid dense coding capacity (χ > 1) by modulating these parameters. The stable value of χ reveals that the decoherence cannot entirely destroy the dense coding capacity. In addition, decreasing Ω or increasing μ can not only enhance the stable value of χ but also weaken the effects of decoherence. When the initial state is the Werner state, the purity r of the initial state plays a key role in adjusting the dense coding capacity: χ can be significantly increased by improving the purity. When the initial state is the Bell state, a spin squeezing interaction that is large compared with the magnetic field guarantees optimal dense coding. One cannot always achieve a valid dense coding capacity for the Werner state, whereas for the Bell state the dense coding capacity χ always remains greater than 1.
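For intuition on why the Werner-state purity r matters, the one-way dense coding capacity for a shared state ρ_AB is commonly taken as χ = log₂ d + S(ρ_B) − S(ρ_AB), with χ > 1 meaning a genuine quantum advantage. A hedged sketch (function names and normalization are mine, not the paper's):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def chi_werner(r):
    """chi = log2(2) + S(rho_B) - S(rho_AB) for the two-qubit Werner state
    rho = r |Phi+><Phi+| + (1 - r) I/4."""
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)          # Bell state |Phi+>
    rho = r * np.outer(phi, phi) + (1 - r) * np.eye(4) / 4
    # partial trace over subsystem A (indices ordered a, b, a', b')
    rho_b = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)
    return 1.0 + entropy(rho_b) - entropy(rho)

print(chi_werner(1.0))  # ~2.0: pure Bell state, maximal capacity
print(chi_werner(0.0))  # 0.0: maximally mixed state, no quantum advantage
```

The sketch reproduces the abstract's qualitative claim: higher purity r raises χ, and only sufficiently pure Werner states reach the valid-dense-coding regime χ > 1.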
Relating quantum discord with the quantum dense coding capacity
Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin
2015-01-15
We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
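The invertibility condition above can be checked mechanically. A small sketch (my own helper, with arithmetic assumed over GF(2), as is usual for graph-state adjacency matrices):

```python
import numpy as np

def invertible_gf2(M):
    """Gaussian elimination mod 2; True iff M is invertible over GF(2)."""
    M = np.array(M, dtype=np.uint8) % 2
    if M.shape[0] != M.shape[1]:
        return False
    n = M.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        if pivot is None:
            return False                      # no pivot: singular
        M[[col, pivot]] = M[[pivot, col]]     # swap pivot row into place
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]                # eliminate mod 2
    return True

print(invertible_gf2([[0, 1], [1, 0]]))  # True: invertible over GF(2)
print(invertible_gf2([[1, 1], [1, 1]]))  # False: rows dependent mod 2
```

Under the paper's criterion, only sender-receiver subgraphs whose reduced adjacency matrix passes this test support the deterministic schemes.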
DISH CODE A deeply simplified hydrodynamic code for applications to warm dense matter
More, Richard
2007-08-22
DISH is a 1-dimensional (planar) Lagrangian hydrodynamic code intended for application to experiments on warm dense matter. The code is a simplified version of the DPC code written in the Data and Planning Center of the National Institute for Fusion Science in Toki, Japan. DPC was originally intended as a testbed for exploring equation-of-state and opacity models, but turned out to have a variety of applications. The DISH code is a "deeply simplified hydrodynamic" code, deliberately made as simple as possible. It is intended to be easy to understand, easy to use, and easy to change.
Optimizing Dense Plasma Focus Neutron Yields with Fast Gas Jets
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Kueny, Christopher; Stein, Elizabeth; Link, Anthony; Schmidt, Andrea
2016-10-01
We report a study using the particle-in-cell code LSP to perform fully kinetic simulations modeling dense plasma focus (DPF) devices with high-density gas jets on axis. The high-density jet models fast gas puffs, which allow for more mass on axis while maintaining the optimal pressure for the DPF. As the density of the jet relative to the background fill increases, we find that the neutron yield increases, as does the variability in the neutron yield. Introducing perturbations in the jet density allows for consistent seeding of the m = 0 instability, leading to more consistent ion acceleration and higher neutron yields with less variability. Jets with higher on-axis density are found to have the greatest yield. The optimal jet configuration is explored. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Teleportation and dense coding with genuine multipartite entanglement.
Yeo, Ye; Chua, Wee Kang
2006-02-17
We present an explicit protocol E0 for faithfully teleporting an arbitrary two-qubit state via a genuine four-qubit entangled state. By construction, our four-partite state is not reducible to a pair of Bell states. Its properties are compared and contrasted with those of the four-party Greenberger-Horne-Zeilinger and W states. We also give a dense coding scheme D0 involving our state as a shared resource of entanglement. Both D0 and E0 indicate that our four-qubit state is a likely candidate for the genuine four-partite analogue to a Bell state.
Modular optimization code package: MOZAIK
NASA Astrophysics Data System (ADS)
Bekar, Kursat B.
This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
Power System Optimization Codes Modified
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1999-01-01
A major modification of and addition to existing Closed Brayton Cycle (CBC) space power system optimization codes was completed. These modifications relate to the global minimum mass search driver programs containing nested iteration loops: an outer iteration on cycle temperature ratio and three separate pressure ratio iteration loops--one loop for maximizing thermodynamic efficiency, one for minimizing radiator area, and a final loop for minimizing overall power system mass. Using the method of steepest ascent, the code sweeps through the pressure ratio space repeatedly, each time with smaller iteration step sizes, so that the three optimum pressure ratios can be obtained to any desired accuracy for each of the objective functions referred to above (i.e., maximum thermodynamic efficiency, minimum radiator area, and minimum system mass). Two separate options for the power system heat source are available: 1. A nuclear fission reactor, provided with a radiation shield (composed of a lithium hydride (LiH) neutron shield and a tungsten (W) gamma shield). Suboptions can be used to select the type of reactor (i.e., fast spectrum liquid metal cooled or epithermal high-temperature gas reactor (HTGR)). 2. A solar heat source. This option includes a parabolic concentrator and heat receiver for raising the temperature of the recirculating working fluid. A useful feature of the code modifications is that key cycle parameters are displayed, including the overall system specific mass in kilograms per kilowatt and the system specific power in watts per kilogram, as the results for each temperature ratio are computed. When the minimum mass temperature ratio is encountered, a message is printed out. Several levels of detailed information on cycle state points, subsystem mass results, and radiator temperature profiles are stored for this temperature ratio condition and can be displayed or printed by users.
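The sweep-and-refine search described above (repeated sweeps with shrinking step sizes) can be sketched generically. This is a toy of mine, not the NASA code; the "efficiency" curve and its peak at 3.2 are made-up numbers:

```python
def sweep_refine(f, lo, hi, steps=20, tol=1e-6):
    """Repeatedly sweep [lo, hi] on a grid to maximize a unimodal f:
    locate the best grid point, bracket it, and re-sweep with a smaller
    step until the step size reaches the desired accuracy tol."""
    best = (lo + hi) / 2
    while (hi - lo) / steps > tol:
        step = (hi - lo) / steps
        xs = [lo + i * step for i in range(steps + 1)]
        best = max(xs, key=f)
        lo, hi = best - step, best + step   # narrow the bracket
    return best

# hypothetical "thermodynamic efficiency vs. pressure ratio" curve
efficiency = lambda pr: -(pr - 3.2) ** 2
print(round(sweep_refine(efficiency, 1.0, 10.0), 4))  # 3.2
```

The same driver can serve all three objectives in the abstract by passing efficiency, negative radiator area, or negative system mass as `f`.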
TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION
Yang, L.
2011-03-28
Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when sufficient computing power is available. It can include various realistic errors and is closer to reality than theoretical estimations. In this approach, a fast and parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
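The block-interleaved SPC construction described above can be demonstrated in a few lines. A hedged sketch (my own toy, not JPL's implementation): each row of data gets a single XOR parity symbol, the array is transmitted column by column, so a burst of erasures no longer than the column height hits at most one symbol per row, and each row is repaired from its parity.

```python
import numpy as np

def encode_spc(rows):
    """Append a single-parity-check (XOR) symbol to each row."""
    parity = np.bitwise_xor.reduce(rows, axis=1, keepdims=True)
    return np.hstack([rows, parity])

rows = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.uint8)
code = encode_spc(rows)             # shape (3, 4): 3 data symbols + parity
stream = code.T.ravel().copy()      # block interleave: send column-wise

stream[2:5] = 0                     # burst erasure of 3 consecutive symbols
erased = np.zeros(stream.size, bool)
erased[2:5] = True

rx = stream.reshape(code.T.shape).T            # de-interleave
rx_erased = erased.reshape(code.T.shape).T
for r in range(rx.shape[0]):                   # repair each row via parity
    (miss,) = np.where(rx_erased[r])
    if miss.size == 1:
        rx[r, miss[0]] = np.bitwise_xor.reduce(np.delete(rx[r], miss[0]))

assert np.array_equal(rx[:, :3], rows)         # data fully recovered
```

With an RS row code instead of SPC, the same interleaving recovers multiple erasures per row, which is the near-optimal MDS behavior the abstract describes.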
Optimal interference code based on machine learning
NASA Astrophysics Data System (ADS)
Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua
2016-10-01
In this paper, we analyze the characteristics of pseudo-random codes, using the m-sequence as a case study. Drawing on coding theory, we introduce the jamming methods and simulate the interference effect and its probability model in MATLAB. Based on the decoding time an adversary requires, we derive the optimal formula and optimal coefficients by machine learning, yielding a new optimal interference code. First, in the recognition phase, we judge the effect of interference by simulating the decoding time of the laser seeker. Then, we use laser active deception jamming to simulate the interference process in the tracking phase; this study adopts the laser active deception jamming method. To improve the interference performance, the model is simulated in MATLAB. We determine the minimum number of pulse intervals that must be received, from which the precise interval number of the laser pointer for m-sequence encoding can be concluded. To find the shortest spacing, we apply the greatest-common-divisor method. Then, combining this with the coding regularity identified earlier, we restore the pulse intervals of the received pseudo-random code. Finally, we can control the timing of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
Optimal patch code design via device characterization
NASA Astrophysics Data System (ADS)
Wu, Wencheng; Dalal, Edul N.
2012-01-01
In many color measurement applications, such as those for color calibration and profiling, a "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
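The Lab-space idea above can be illustrated with a toy level selector (my own sketch, not the paper's method; the Lab values are hypothetical): pick the subset of candidate patches that maximizes the minimum pairwise distance in Lab space, where measurement noise is roughly uniform, rather than spacing levels evenly in CMYK.

```python
import numpy as np
from itertools import combinations

def pick_levels(lab_points, k):
    """Brute force: choose k points maximizing the minimum pairwise
    Euclidean distance (a stand-in for Delta E) in CIE Lab space."""
    best, best_sep = None, -1.0
    for combo in combinations(range(len(lab_points)), k):
        pts = lab_points[list(combo)]
        sep = min(np.linalg.norm(p - q) for p, q in combinations(pts, 2))
        if sep > best_sep:
            best, best_sep = combo, sep
    return best, best_sep

# hypothetical measured (L*, a*, b*) values of five candidate patches
lab = np.array([[20., 0, 0], [35., 0, 0], [38., 0, 0], [60., 0, 0], [90., 0, 0]])
levels, sep = pick_levels(lab, 3)
print(levels, sep)  # (0, 3, 4) 30.0: skips the nearly identical 35/38 pair
```

The selector naturally avoids device levels that print to nearly identical colors, which is exactly where decoding robustness is lost.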
Cross-code comparisons of mixing during the implosion of dense cylindrical and spherical shells
NASA Astrophysics Data System (ADS)
Joggerst, C. C.; Nelson, Anthony; Woodward, Paul; Lovekin, Catherine; Masser, Thomas; Fryer, Chris L.; Ramaprabhu, P.; Francois, Marianne; Rockefeller, Gabriel
2014-10-01
We present simulations of the implosion of a dense shell in two-dimensional (2D) spherical and cylindrical geometry performed with four different compressible, Eulerian codes: RAGE, FLASH, CASTRO, and PPM. We follow the growth of instabilities on the inner face of the dense shell. Three codes employed Cartesian grid geometry, and one (FLASH) employed polar grid geometry. While the codes are similar, they employ different advection algorithms, limiters, adaptive mesh refinement (AMR) schemes, and interface-preservation techniques. We find that the growth rate of the instability is largely insensitive to the choice of grid geometry or other implementation details specific to an individual code, provided the grid resolution is sufficiently fine. Overall, all simulations from different codes compare very well on the fine grids for which we tested them, though they show slight differences in small-scale mixing. Simulations produced by codes that explicitly limit numerical diffusion show a smaller amount of small-scale mixing than codes that do not. This difference is most prominent for low-mode perturbations where little instability finger interaction takes place, and less prominent for high- or multi-mode simulations where a great deal of interaction takes place, though it is still present. We present RAGE and FLASH simulations to quantify the initial perturbation amplitude to wavelength ratio at which metrics of mixing agree across codes, and find that bubble/spike amplitudes are converged for low-mode and high-mode simulations in which the perturbation amplitude is more than 1% and 5% of the wavelength of the perturbation, respectively. Other metrics of small-scale mixing depend on details of multi-fluid advection and do not converge between codes for the resolutions that were accessible.
Efficient simultaneous dense coding and teleportation with two-photon four-qubit cluster states
NASA Astrophysics Data System (ADS)
Zhang, Cai; Situ, Haozhen; Li, Qin; He, Guang Ping
2016-08-01
We first propose a simultaneous dense coding protocol with two-photon four-qubit cluster states, in which two receivers can simultaneously obtain their respective classical information sent by a sender. Because each photon has two degrees of freedom, the protocol achieves a high transmittance. The security of the simultaneous dense coding protocol is also analyzed. Second, we investigate how to simultaneously teleport two different quantum states with the polarization and path degrees of freedom using cluster states to two receivers, respectively, and discuss its security. The preparation and transmission of two-photon four-qubit cluster states are less difficult than those of four-photon entangled states, and such states have been experimentally generated with nearly perfect fidelity and a high generation rate. Thus, our protocols are feasible with current quantum techniques.
Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition
Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joeseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.
2014-05-20
Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amounts compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized, plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism
Rate-distortion optimized adaptive transform coding
NASA Astrophysics Data System (ADS)
Lim, Sung-Chang; Kim, Dae-Yeon; Jeong, Seyoon; Choi, Jin Soo; Choi, Haechul; Lee, Yung-Lyul
2009-08-01
We propose a rate-distortion optimized transform coding method that adaptively employs either an integer cosine transform, an integer-approximated version of the discrete cosine transform (DCT), or an integer sine transform (IST), selected in a rate-distortion sense. The DCT, which has been adopted in most video-coding standards, is known to be a suboptimal substitute for the Karhunen-Loève transform. However, depending on the correlation of a signal, an alternative transform can achieve higher coding efficiency. We introduce a discrete sine transform (DST) that achieves high energy compactness in the correlation coefficient range of -0.5 to 0.5 and apply it to the current design of H.264/AVC (advanced video coding). Moreover, to avoid encoder-decoder mismatch and keep the implementation simple, an IST that is an integer-approximated version of the DST is developed. The experimental results show that the proposed method achieves a Bjøntegaard delta-rate gain of up to 5.49% compared to Joint Model 11.0.
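The per-block transform choice can be sketched with floating-point transforms and a crude rate proxy. This is an illustration of mine, not the H.264/AVC integer design: apply both an orthonormal DCT-II and a DST, quantize, and keep whichever minimizes the Lagrangian cost J = D + λR, with R approximated by the count of nonzero quantized coefficients.

```python
import numpy as np

N = 4
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
DCT = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
DCT[0] /= np.sqrt(2)                       # orthonormal DCT-II
DST = np.sqrt(2 / (N + 1)) * np.sin(np.pi * (n + 1) * (k + 1) / (N + 1))  # DST-I

def rd_choose(block, q=8.0, lam=4.0):
    """Pick the transform minimizing J = distortion + lam * (nonzero count)."""
    best = None
    for name, T in (("DCT", DCT), ("DST", DST)):
        coeff = np.round(T @ block / q)            # forward transform + quantize
        recon = T.T @ (coeff * q)                  # dequantize + inverse
        J = np.sum((block - recon) ** 2) + lam * np.count_nonzero(coeff)
        if best is None or J < best[0]:
            best = (J, name)
    return best[1]

print(rd_choose(np.full(4, 10.0)))  # DCT: a flat block compacts into the DC term
```

Both matrices are orthonormal, so the transpose serves as the inverse; in the standard itself the decision and transforms are integer-exact to keep encoder and decoder in sync.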
Optimizing Extender Code for NCSX Analyses
M. Richman, S. Ethier, and N. Pomphrey
2008-01-22
Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined: an even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than a Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db_access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch.
Optimal zone coding using the slant transform
Zadiraka, V.K.; Evtushenko, V.N.
1995-03-01
Discrete orthogonal transforms (DOTs) are widely used in digital signal processing, image coding and compression, systems theory, communication, and control. A special representative of the class of DOTs with nonsinusoidal basis functions is the slant transform, which is distinguished by the presence of a slanted vector with linearly decreasing components in its basis. The slant transform of fourth and eighth orders was introduced in 1971 by Enomoto and Shibata especially for efficient representation of the video signal in line sections with smooth variation of brightness. It has been used for television image coding. Pratt, Chen, and Welch generalized the slant transform to vectors of any dimension N = 2^n and two-dimensional arrays, and derived posterior estimates of reconstruction error with zonal image compression (the zones were chosen by trial and error) for various transforms. These estimates show that, for the same N and the same compression ratio τ, the slant transform is inferior to the Karhunen-Loève transform and superior to Walsh and Fourier transforms. In this paper, we derive prior estimates of the reconstruction error for the slant transform in zone coding and suggest an optimal technique for zone selection.
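The 4-point slant transform mentioned above has a standard closed form (this sketch uses the widely cited matrix, e.g. Pratt's; any transcription slip is mine). Its second basis vector decreases linearly, so a linear brightness ramp compacts into a single coefficient:

```python
import numpy as np

a = 3 / np.sqrt(5)
b = 1 / np.sqrt(5)
S4 = 0.5 * np.array([
    [1,  1,  1,  1],
    [a,  b, -b, -a],   # the "slant": linearly decreasing components
    [1, -1, -1,  1],
    [b, -a,  a, -b],
])

assert np.allclose(S4 @ S4.T, np.eye(4))   # orthonormal basis

ramp = np.array([3.0, 1.0, -1.0, -3.0])    # line section with linear brightness
coeff = S4 @ ramp
print(np.round(coeff, 6))  # all energy lands in the slant coefficient
```

This energy compaction on smooth brightness ramps is precisely why Enomoto and Shibata proposed the transform for video line sections.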
Optimization Principles for the Neural Code
NASA Astrophysics Data System (ADS)
Deweese, Michael Robert
1995-01-01
Animals receive information from the world in the form of continuous functions of time. At a very early stage in processing, however, these continuous signals are converted into discrete sequences of identical "spikes". All information that the brain receives about the outside world is encoded in the arrival times of these spikes. The goal of this thesis is to determine if there is a universal principle at work in this neural code. We are motivated by several recent experiments on a wide range of sensory systems which share four main features: High information rates, moderate signal to noise ratio, efficient use of the spike train entropy to encode the signal, and the ability to extract nearly all the information encoded in the spike train with a linear response function triggered by the spikes. We propose that these features can be understood in terms of codes "designed" to maximize information flow. To test this idea, we use the fact that any point process encoding of an analog signal embedded in noise can be written in the language of a threshold crossing model to develop a systematic expansion for the transmitted information about the Poisson limit--the limit where there are no correlations between the spikes. All codes take the same simple form in the Poisson limit, and all of the seemingly unrelated features of the data arise naturally when we optimize a simple linear filtered threshold crossing model. We make a new prediction: Finding the optimum requires adaptation to the statistical structure of the signal and noise, not just to DC offsets. The only disagreement we find is that real neurons outperform our model in the task it was optimized for--they transmit much more information. We then place an upper bound on the amount of information available from the leading term in the Poisson expansion for any possible encoding, and find that real neurons do exceedingly well even by this standard. We conclude that several important features of the neural code can
Optimality principles for the visual code
NASA Astrophysics Data System (ADS)
Pitkow, Xaq
One way to try to make sense of the complexities of our visual system is to hypothesize that evolution has developed nearly optimal solutions to the problems organisms face in the environment. In this thesis, we study two such principles of optimality for the visual code. In the first half of this dissertation, we consider the principle of decorrelation. Influential theories assert that the center-surround receptive fields of retinal neurons remove spatial correlations present in the visual world. It has been proposed that this decorrelation serves to maximize information transmission to the brain by avoiding transfer of redundant information through optic nerve fibers of limited capacity. While these theories successfully account for several aspects of visual perception, the notion that the outputs of the retina are less correlated than its inputs has never been directly tested at the site of the putative information bottleneck, the optic nerve. We presented visual stimuli with naturalistic image correlations to the salamander retina while recording responses of many retinal ganglion cells using a microelectrode array. The output signals of ganglion cells are indeed decorrelated compared to the visual input, but the receptive fields are only partly responsible. Much of the decorrelation is due to the nonlinear processing by neurons rather than the linear receptive fields. This form of decorrelation dramatically limits information transmission. Instead of improving coding efficiency we show that the nonlinearity is well suited to enable a combinatorial code or to signal robust stimulus features. In the second half of this dissertation, we develop an ideal observer model for the task of discriminating between two small stimuli which move along an unknown retinal trajectory induced by fixational eye movements. The ideal observer is provided with the responses of a model retina and guesses the stimulus identity based on the maximum likelihood rule, which involves sums
Dense codes at high speeds: varying stimulus properties to improve visual speller performance.
Geuze, Jeroen; Farquhar, Jason D R; Desain, Peter
2012-02-01
This paper investigates the effect of varying different stimulus properties on the performance of the visual speller. Each of these stimulus properties has been tested in previous literature and has a known effect on visual speller performance. This paper investigates whether a combination of these types of stimuli can lead to a greater improvement. It describes an experiment aimed at answering the following questions. (i) Does visual speller performance suffer from high stimulus rates? (ii) Does an increase in stimulus rate lead to a lower training time for an online visual speller? (iii) What aspect of the difference in the event-related potential to a flash or a flip stimulus causes the increase in accuracy? (iv) Can an error-correcting (dense) stimulus code overcome the reduction in performance associated with decreasing target-to-target intervals? We found that higher stimulus rates can improve visual speller performance and can lead to less time required to train the system. We also found that a proper stimulus code can overcome the stronger response to rows and columns, but cannot greatly improve speller performance.
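A "dense" error-correcting stimulus code assigns each symbol a binary codeword with large minimum Hamming distance, so single misclassified flashes do not flip the decoded symbol. A toy sketch (my own, not the paper's stimulus design) that searches for such a code at random:

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def random_dense_code(n_symbols, length, trials=2000, seed=1):
    """Randomly search for n_symbols binary codewords of the given length
    maximizing the minimum pairwise Hamming distance."""
    rng = random.Random(seed)
    best, best_d = None, -1
    for _ in range(trials):
        code = [tuple(rng.randint(0, 1) for _ in range(length))
                for _ in range(n_symbols)]
        d = min(hamming(a, b)
                for i, a in enumerate(code) for b in code[i + 1:])
        if d > best_d:
            best, best_d = code, d
    return best, best_d

code, dmin = random_dense_code(6, 10)
print(dmin)  # minimum Hamming distance of the chosen stimulus code
```

A code with minimum distance d tolerates roughly (d - 1) / 2 single-flash errors per symbol, which is the margin the abstract's question (iv) is probing.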
New optimal asymmetric quantum codes constructed from constacyclic codes
NASA Astrophysics Data System (ADS)
Xu, Gen; Li, Ruihu; Guo, Luobin; Lü, Liangdong
2017-02-01
In this paper, we propose the construction of asymmetric quantum codes from two families of constacyclic codes over the finite field 𝔽_{q²} of code length n, where for the first family, q is an odd prime power of the form 4t + 1 (t ≥ 1 an integer) or 4t − 1 (t ≥ 2 an integer) and n₁ = (q² + 1)/2; for the second family, q is an odd prime power of the form 10t + 3 or 10t + 7 (t ≥ 0 an integer) and n₂ = (q² + 1)/5. As a result, families of new asymmetric quantum codes [[n, k, d_z/d_x]] …
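As a quick arithmetic check of the two code-length families above (a hedged sketch; the helper names are ours, not the paper's):

```python
# Code lengths of the two constacyclic constructions, as reconstructed above:
# n1 = (q^2 + 1)/2 for q = 4t + 1 or 4t - 1, and n2 = (q^2 + 1)/5 for
# q = 10t + 3 or 10t + 7. Function names are illustrative only.
def n1(q):
    return (q * q + 1) // 2

def n2(q):
    return (q * q + 1) // 5

# q = 5 (= 4*1 + 1) gives n1 = 13; q = 7 (= 10*0 + 7) gives n2 = 10.
```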
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
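The execute-then-fall-back idea can be sketched in a few lines (a hypothetical illustration; the mechanism in the abstract operates inside a compiler and its runtime, and the function names here are ours):

```python
# Sketch of dual-version execution: try the aggressively optimized code
# first, and re-run the conservative version if it raises an exception.
def run_with_fallback(aggressive, conservative, *args):
    try:
        return aggressive(*args)
    except Exception:
        # The unsafe optimization misfired; recover with the safe version.
        return conservative(*args)

# Toy example: an optimized routine that assumed its divisor is never zero.
def fast_div(a, b):
    return a // b                      # unsafe when b == 0

def safe_div(a, b):
    return a // b if b != 0 else 0     # conservative, always defined
```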
Construction of a Compact, Low-Inductance, 100 J Dense Plasma Focus for Yield Optimization Studies
NASA Astrophysics Data System (ADS)
Cooper, Christopher; Povilus, Alex; Chapman, Steven; Falabella, Steve; Podpaly, Yuri; Shaw, Brian; Liu, Jason; Schmidt, Andrea
2016-10-01
A new 100 J mini dense plasma focus (DPF) is constructed to optimize neutron yields for a variety of plasma conditions and anode shapes. The device generates neutrons by leveraging instabilities that occur during a z-pinch in a plasma sheath to accelerate a beam of deuterium ions into a background deuterium gas target. The features that distinguish this mini DPF from previous 100 J devices are a compact, engineered electrode geometry and a low-impedance driver. The driving circuit inductance is minimized by mounting the capacitors close to the back of the anode and cathode (< 20 cm away), increasing the breakdown current and yields. The anode can rapidly be changed out to test new designs. The neutron yield and 2D images of the visible light emission are compared to simulations with the hybrid kinetic code LSP, which can directly simulate the device and anode designs. Initial studies of the sheath physics and neutron yields for a scaling of discharge voltages and neutral fill pressures are presented. Prepared by LLNL under Contract DE-AC52-07NA27344.
Sparse coding based dense feature representation model for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang
2015-11-01
We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.
Optimized quantum error-correction codes for experiments
NASA Astrophysics Data System (ADS)
Nebendahl, V.
2015-02-01
We identify gauge freedoms in quantum error correction (QEC) codes and introduce strategies for optimal control algorithms to find the gauges which allow the easiest experimental realization. Hereby, the optimal gauge depends on the underlying physical system and the available means to manipulate it. The final goal is to obtain optimal decompositions of QEC codes into elementary operations which can be realized with high experimental fidelities. In the first part of this paper, this subject is studied in a general fashion, while in the second part, a system of trapped ions is treated as a concrete example. A detailed optimization algorithm is explained and various decompositions are presented for the three qubit code, the five qubit code, and the seven qubit Steane code.
NASA Astrophysics Data System (ADS)
Wang, Fei; Maimaitiyiming-Tusun; Parouke-Paerhati; Ahmad-Abliz
2015-09-01
The influence of intrinsic decoherence on various correlations and dense coding in a model which consists of two identical superconducting charge qubits coupled by a fixed capacitor is investigated. The results show that, despite the intrinsic decoherence, the correlations as well as the dense coding channel capacity can be effectively increased via a suitable combination of system parameters, i.e., making the mutual coupling energy between the two charge qubits larger than the Josephson energy of the qubits. The bigger the difference between them, the better the effect. Project supported by the Project to Develop Outstanding Young Scientific Talents of China (Grant No. 2013711019), the Natural Science Foundation of Xinjiang Province, China (Grant No. 2012211A052), the Foundation for Key Program of Ministry of Education of China (Grant No. 212193), and the Innovative Foundation for Graduate Students Granted by the Key Subjects of Theoretical Physics of Xinjiang Province, China (Grant No. LLWLL201301).
Optimization of KINETICS Chemical Computation Code
NASA Technical Reports Server (NTRS)
Donastorg, Cristina
2012-01-01
NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables that are used in three of the main subroutines needed to be investigated. Because of the sheer amount of code there is to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
Group Complementary Codes With Optimized Aperiodic Correlation.
1983-04-01
efforts have addressed this problem in the past, and several waveform designs have resulted in the potential reduction or elimination of the range sidelobe problem. For example, Barker codes (also known as perfect binary words) limit the range sidelobes to a value of 1/N, expressed in the
Optimal periodic binary codes of lengths 28 to 64
NASA Technical Reports Server (NTRS)
Tyler, S.; Keston, R.
1980-01-01
Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are where (1) the peak sidelobe in the autocorrelation function is small and (2) the sum of the squares of the sidelobes in the autocorrelation function is small.
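The two figures of merit described above are easy to state in code. A minimal sketch (our own illustration, not the report's search program), using a length-7 m-sequence whose off-peak periodic correlations are all −1:

```python
# Periodic autocorrelation of a +/-1 binary code, and the two sidelobe
# metrics of interest: peak sidelobe magnitude and sidelobe energy.
def periodic_autocorrelation(code):
    n = len(code)
    return [sum(code[i] * code[(i + k) % n] for i in range(n))
            for k in range(n)]

def sidelobe_metrics(code):
    acf = periodic_autocorrelation(code)
    sidelobes = acf[1:]                       # exclude the zero-shift peak
    peak = max(abs(s) for s in sidelobes)
    energy = sum(s * s for s in sidelobes)
    return peak, energy

# Length-7 m-sequence: every off-peak periodic correlation equals -1,
# so the peak sidelobe is 1 and the sidelobe energy is 6.
m_seq = [1, 1, 1, -1, 1, -1, -1]
peak, energy = sidelobe_metrics(m_seq)
```

An exhaustive search over longer codes, as in the report, would evaluate these two metrics for every candidate word and keep the minimizers.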
Optimizing Nuclear Physics Codes on the XT5
Hartman-Baker, Rebecca J; Nam, Hai Ah
2011-01-01
Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.
The effect of code expanding optimizations on instruction cache design
NASA Technical Reports Server (NTRS)
Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.
1991-01-01
It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
Optimal Subband Coding of Cyclostationary Signals
2007-11-02
framework, making the underlying task much simpler. • A common occurrence of cyclostationarity is in Orthogonal Frequency Division Multiplexed (OFDM) communications. We have shown that certain channel resource allocation problems for OFDM systems are dual problems of subband coding. We have solved the optimum resource allocation problem for OFDM in the multiuser environment. Specifically, we have considered in turn a variety of settings culminating
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
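The multistage residual idea can be illustrated with scalar codebooks (a toy sketch under our own assumptions, not the paper's entropy-constrained design):

```python
# Two-stage residual quantization: the second stage quantizes the error
# left by the first, so two small codebooks act like one finer codebook.
def quantize(value, codebook):
    return min(codebook, key=lambda c: abs(value - c))

def two_stage_residual(value, coarse, fine):
    c1 = quantize(value, coarse)          # stage 1: coarse approximation
    c2 = quantize(value - c1, fine)       # stage 2: quantize the residual
    return c1 + c2

coarse = [-1.0, 0.0, 1.0]
fine = [-0.25, 0.0, 0.25]
reconstructed = two_stage_residual(0.8, coarse, fine)   # 1.0 + (-0.25) = 0.75
```

Staging keeps the search cost at the sum, rather than the product, of the codebook sizes, which is the complexity-performance control the abstract alludes to.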
Optimal Grouping and Matching for Network-Coded Cooperative Communications
Sharma, S; Shi, Y; Hou, Y T; Kompella, S; Midkiff, S F
2011-11-01
Network-coded cooperative communications (NC-CC) is a new advance in wireless networking that exploits network coding (NC) to improve the performance of cooperative communications (CC). However, there remains very limited understanding of this new hybrid technology, particularly at the link layer and above. This paper fills in this gap by studying a network optimization problem that requires joint optimization of session grouping, relay node grouping, and matching of session/relay groups. After showing that this problem is NP-hard, we present a polynomial time heuristic algorithm to this problem. Using simulation results, we show that our algorithm is highly competitive and can produce near-optimal results.
Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.
2016-01-01
Objective: Transcranial direct current stimulation (tDCS) aims to alter brain function noninvasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical currents to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date. Approach: We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results: Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance: The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives insight into the
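A drastically simplified version of the optimization conveys the idea (our toy sketch, not the paper's head-model formulation): maximizing the directional current a·x subject to a power bound ‖x‖ ≤ p_max and Kirchhoff's balance constraint Σx = 0 has a closed-form solution by projection and scaling.

```python
# Toy dense-array stimulus optimization: maximize a.x (current density in a
# desired direction in the ROI) subject to sum(x) = 0 (injected currents
# must balance) and ||x||_2 <= p_max (a crude stand-in for the paper's
# safety constraints). This simplified convex problem solves in closed form.
def optimal_pattern(a, p_max):
    mean = sum(a) / len(a)
    proj = [ai - mean for ai in a]             # project onto sum(x) = 0
    norm = sum(p * p for p in proj) ** 0.5
    return [p_max * p / norm for p in proj]    # scale to the power bound

x = optimal_pattern([1.0, 0.0, -1.0], p_max=2.0)
# x sums to zero and has Euclidean norm exactly p_max.
```

The paper's actual formulation adds per-electrode and brain-power constraints, which is why it is handed to a general convex solver rather than solved analytically.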
State injection, lattice surgery, and dense packing of the deformation-based surface code
NASA Astrophysics Data System (ADS)
Nagayama, Shota; Satoh, Takahiko; Van Meter, Rodney
2017-01-01
Resource consumption of the conventional surface code is expensive, in part due to the need to separate the defects that create the logical qubit far apart on the physical qubit lattice. We propose that instantiating the deformation-based surface code using superstabilizers will make it possible to detect short error chains connecting the superstabilizers, allowing us to place logical qubits close together. Additionally, we demonstrate the process of conversion from the defect-based surface code, which works as arbitrary state injection, and a lattice-surgery-like controlled not (cnot) gate implementation that requires fewer physical qubits than the braiding cnot gate. Finally, we propose a placement design for the deformation-based surface code and analyze its resource consumption; large-scale quantum computation requires (25d² + 170d + 289)/4 physical qubits per logical qubit, where d is the code distance of the standard surface code, whereas the planar code requires 16d² − 16d + 4 physical qubits per logical qubit, for a reduction of about 50%.
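The two qubit-count formulas above (as reconstructed; note the numerator 25d² + 170d + 289 is the perfect square (5d + 17)²) can be compared directly:

```python
# Physical qubits per logical qubit, per the abstract's formulas.
def deformation_based(d):
    return (25 * d * d + 170 * d + 289) / 4    # equals ((5*d + 17) / 2)**2

def planar(d):
    return 16 * d * d - 16 * d + 4

# At large code distance the deformation-based layout needs fewer qubits
# (leading coefficients 25/4 vs 16); at small d the planar code is smaller.
counts_d17 = (deformation_based(17), planar(17))
```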
A Systematic Method of Interconnection Optimization for Dense-Array Concentrator Photovoltaic System
Siaw, Fei-Lu; Chong, Kok-Keong
2013-01-01
This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points, which are at short-circuit, open-circuit, and maximum power point, are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%. PMID:24453823
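The current-mismatch effect that the grouping algorithm tries to minimize can be shown with a toy series string (our illustration; the actual method works from measured flux maps and full I-V predictions):

```python
# In a series-connected string, the string current is limited by the weakest
# cell, so non-uniform illumination wastes the power of the brighter cells.
def series_string_power(cells):
    """cells: list of (i_mp, v_mp) maximum-power-point values per cell."""
    i_string = min(i for i, _ in cells)        # weakest cell sets the current
    return i_string * sum(v for _, v in cells)

uniform = [(2.0, 3.0)] * 4                     # evenly illuminated string
mismatched = [(2.0, 3.0)] * 3 + [(1.0, 3.0)]   # one cell at half the flux
# One weak cell halves the whole string's output (24.0 W -> 12.0 W here),
# which is why grouping cells of similar illumination into modules matters.
```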
Multiview coding mode decision with hybrid optimal stopping model.
Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay
2013-04-01
In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
Optimal and efficient decoding of concatenated quantum block codes
Poulin, David
2006-11-15
We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead.
Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui
2015-07-27
This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system comprising a non-imaging dish concentrator by considering circumsolar radiation and slope error effects. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar cell interconnection circuit in a dense-array concentrator photovoltaic module has been proposed by minimizing the current mismatch caused by the non-uniformity of concentrated sunlight. An optimized layout of the interconnected solar cell circuit with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless channel; the method centers on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel-optimized quantizers yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained and appropriate comparisons against a reference system designed for no channel error were made.
Highly optimized tolerance and power laws in dense and sparse resource regimes.
Manning, M; Carlson, J M; Doyle, J
2005-07-01
Power-law cumulative frequency versus event size distributions, P(size ≥ l) ∼ l^(−α), are frequently cited as evidence for complexity and serve as a starting point for linking theoretical models and mechanisms with observed data. Systems exhibiting this behavior present fundamental mathematical challenges in probability and statistics. The broad span of length and time scales associated with heavy-tailed processes often requires special sensitivity to distinctions between discrete and continuous phenomena. A discrete highly optimized tolerance (HOT) model, referred to as the probability, loss, resource (PLR) model, gives the exponent α = 1/d as a function of the dimension d of the underlying substrate in the sparse resource regime. This agrees well with data for wildfires, web file sizes, and electric power outages. However, another HOT model, based on a continuous (dense) distribution of resources, predicts α = 1 + 1/d. In this paper we describe and analyze a third model, the cuts model, which exhibits both behaviors but in different regimes. We use the cuts model to show all three models agree in the dense resource limit. In the sparse resource regime, the continuum model breaks down, but in this case, the cuts and PLR models are described by the same exponent.
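For a concrete reading of the cumulative power law quoted above (a minimal numerical illustration, nothing model-specific):

```python
# Cumulative power law P(size >= l) ~ l**(-alpha). For the sparse-resource
# PLR model alpha = 1/d, so in dimension d = 1 (alpha = 1) doubling the
# event size halves the tail probability; for d = 2 (alpha = 1/2) the
# size must quadruple to halve it.
def ccdf_powerlaw(l, alpha):
    return l ** (-alpha)

halved_d1 = ccdf_powerlaw(2.0, 1.0)    # alpha = 1: twice the size, half the tail
halved_d2 = ccdf_powerlaw(4.0, 0.5)    # alpha = 1/2: four times the size
```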
PEB bake optimization for process window improvement of mixed iso-dense pattern
NASA Astrophysics Data System (ADS)
Liau, C. Y.; Lee, C. H.; Kang, J. T.; Yoon, S. W.; Loo, Christopher; Seow, Bertrand; Sheu, W. B.
2005-08-01
We have shown that process effects induced by extending the post-exposure bake (PEB) temperature in the process flow of chemically amplified photoresists can lead to significant improvements in depth-of-focus (DOF), exposure latitude (EL), and small-geometry printing capability. Due to improved acid dose contrasts and a balanced optimization of acid diffusion in the presence of quencher, the PEB temperature increase has enabled the printing of iso and semi-dense spaces of 0.2 µm and below with a large DOF, using binary masks and 248 nm lithography, without worsening the iso-dense bias. The results and findings of a full patterning process in a device flow, with different PEB temperatures as a process enhancement, are presented. The main objective of this study is to demonstrate how, using KrF lithography with binary masks and no optical proximity correction (OPC) or other reticle enhancement technique (RET), the process latitude can be improved. Lithographic process latitudes, intra-field critical dimension (CD) uniformity, and resist profiles of different PEB processes are evaluated. The after-etch profiles are also investigated to ensure the feasibility of this technique.
Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A
2011-11-04
The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL, and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or similar mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related
NASA Astrophysics Data System (ADS)
Yang, Guanghui; Chen, Bingzhen; Liu, Youqiang; Guo, Limin; Yao, Shun; Wang, Zhiyong
2015-10-01
As the critical component of a concentrating photovoltaic module, secondary concentrators can be effective in increasing the acceptance angle and incident light, as well as improving the energy uniformity of focal spots. This paper presents a design of a transmission-type secondary microprism for a dense array concentrating photovoltaic module. The 3-D model of this design is established in SolidWorks, and important parameters such as inclination angle and component height are optimized using Zemax. According to the design and simulation results, several secondary microprisms with different parameters are fabricated and tested in combination with a Fresnel lens and a multi-junction solar cell. The sun-simulator IV test results show that the combination has the highest output power when the secondary microprism height is 5 mm and the top facet side length is 7 mm. Compared with the case without a secondary microprism, the output power can improve by 11% after the employment of secondary microprisms, indicating the indispensability of secondary microprisms in concentrating photovoltaic modules.
Optimization of microbial inactivation of shrimp by dense phase carbon dioxide.
Ji, Hongwu; Zhang, Liang; Liu, Shucheng; Qu, Xiaojuan; Zhang, Chaohua; Gao, Jialong
2012-05-01
Microbial inactivation of Litopenaeus vannamei by dense phase carbon dioxide (DPCD) treatment was investigated, and a neural network was used to optimize the process parameters of microbial inactivation. The results showed that DPCD treatment had a remarkable bactericidal effect on the microorganisms of shrimp. A 3×5×2 three-layer neural network model was established. According to the neural network model, the inactivation effect was enhanced as pressure, temperature, and exposure time increased, and temperature was the most important factor affecting microbial inactivation of shrimp. The cooked appearance of shrimp after DPCD treatment was observed and seemed to be readily acceptable within Chinese dietary custom. Therefore, the color change of shrimp by DPCD treatment could have a positive effect on quality attributes. A moderate temperature of 55 °C with 15 MPa for a 26 min treatment time achieved a 3.5-log reduction of total aerobic plate counts (TPC). This parameter combination might be appropriate for shrimp processing by DPCD.
Source mask optimization using real-coded genetic algorithms
NASA Astrophysics Data System (ADS)
Yang, Chaoxing; Wang, Xiangzhao; Li, Sikun; Erdmann, Andreas
2013-04-01
Source mask optimization (SMO) is considered to be one of the technologies to push conventional 193 nm lithography to its ultimate limits. In comparison with other SMO methods that use an inverse problem formulation, SMO based on a genetic algorithm (GA) requires very little knowledge of the process and has the advantage of flexible problem formulation. Recent publications on SMO using a GA employ a binary-coded GA. In general, the performance of a GA depends not only on the merit or fitness function, but also on the parameters, operators and their algorithmic implementation. In this paper, we propose an SMO method using a real-coded GA, where the source and mask solutions are represented by floating-point strings instead of bit strings. In addition, the selection, crossover, and mutation operators are replaced by corresponding floating-point versions. Both binary-coded and real-coded genetic algorithms were implemented in two versions of SMO and compared in numerical experiments, where the target patterns are staggered contact holes and a logic pattern with critical dimensions of 100 nm, respectively. The results demonstrate the performance improvement of the real-coded GA in comparison to the binary-coded version. Specifically, these improvements can be seen in a better convergence behavior. For example, the numerical experiments for the logic pattern showed that the average number of generations to converge to a proper fitness of 6.0 using the real-coded method is 61.8% (100 generations) less than that of the binary-coded method.
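The floating-point operators described above can be sketched as follows. The merit function, population size, BLX-α crossover and Gaussian mutation parameters are illustrative stand-ins, not the lithographic fitness or settings actually used in SMO.

```python
import random

def real_coded_ga(fitness, bounds, pop_size=30, gens=60,
                  blx_alpha=0.5, mut_sigma=0.1, seed=1):
    """Minimize `fitness` over a box using a real-coded GA: floating-point
    genes, truncation selection, BLX-alpha crossover, Gaussian mutation."""
    rng = random.Random(seed)
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        elite = scored[:pop_size // 2]          # survivors, kept unmutated
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = []
            for i, (lo, hi) in enumerate(bounds):
                a, b = sorted((p1[i], p2[i]))
                span = (b - a) * blx_alpha
                g = rng.uniform(a - span, b + span)   # BLX-alpha crossover
                g += rng.gauss(0.0, mut_sigma)        # Gaussian mutation
                child.append(clip(g, lo, hi))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# toy merit function standing in for the lithographic fitness
best = real_coded_ga(lambda x: sum(v * v for v in x), [(-5, 5)] * 4)
```

Because the elite individuals survive each generation untouched, the best fitness is monotonically non-increasing, which is the convergence behavior the abstract compares between codings.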
NASA Astrophysics Data System (ADS)
Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf
2016-11-01
This paper proposes a new code to optimize the performance of spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analyses, by referring to the bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code, when used with the SDD technique, provides high transmission capacity, reduces receiver complexity, and provides better performance compared to the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.
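For intuition about the 10^-9 BER threshold, a Gaussian-noise approximation common in SAC-OCDMA analyses (not necessarily the exact noise model of this paper) maps SNR to BER as follows.

```python
import math

def ber_from_snr(snr):
    """Gaussian approximation often used in SAC-OCDMA performance analysis:
    BER = (1/2) * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# BER falls steeply with SNR; under this model the 1e-9 target is crossed
# between SNR = 140 and SNR = 150 (roughly 21.5-21.8 dB)
ber_low, ber_high = ber_from_snr(140.0), ber_from_snr(150.0)
```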
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, Vinay A.; Farvardin, Nariman
1990-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel. An algorithm is presented for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.
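A minimal sketch of steepest-descent-style bit allocation among transform coefficients: at each step the next bit goes to the coefficient whose quantizer distortion drops the most. The textbook high-rate law D_i = σ_i² · 2^(-2b_i) is an assumed stand-in for the paper's channel-optimized quantizer performance.

```python
def allocate_bits(variances, total_bits):
    """Greedy (marginal-analysis) bit allocation: repeatedly grant one bit
    to the coefficient with the largest distortion reduction, assuming
    per-coefficient distortion sigma^2 * 2^(-2b)."""
    bits = [0] * len(variances)
    for _ in range(total_bits):
        # distortion reduction from adding one more bit to coefficient i
        gains = [v * (2.0 ** (-2 * b) - 2.0 ** (-2 * (b + 1)))
                 for v, b in zip(variances, bits)]
        i = max(range(len(gains)), key=gains.__getitem__)
        bits[i] += 1
    return bits

# higher-variance (low-frequency) coefficients receive more bits
bits = allocate_bits([16.0, 4.0, 1.0, 0.25], 8)
```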
Optimal bounds for parity-oblivious random access codes
NASA Astrophysics Data System (ADS)
Chailloux, André; Kerenidis, Iordanis; Kundu, Srijita; Sikora, Jamie
2016-04-01
Random access coding is an information task that has been extensively studied and found many applications in quantum information. In this scenario, Alice receives an n-bit string x, and wishes to encode x into a quantum state ρ_x, such that Bob, when receiving the state ρ_x, can choose any bit i ∈ [n] and recover the input bit x_i with high probability. Here we study two variants: parity-oblivious random access codes (RACs), where we impose the cryptographic property that Bob cannot infer any information about the parity of any subset of bits of the input, apart from the single bits x_i; and even-parity-oblivious RACs, where Bob cannot infer any information about the parity of any even-size subset of bits of the input. In this paper, we provide the optimal bounds for parity-oblivious quantum RACs and show that they are asymptotically better than the optimal classical ones. Our results provide a large non-contextuality inequality violation and resolve the main open problem in a work of Spekkens et al (2009 Phys. Rev. Lett. 102 010401). Second, we provide the optimal bounds for even-parity-oblivious RACs by proving their equivalence to a non-local game and by providing tight bounds for the success probability of the non-local game via semidefinite programming. In the case of even-parity-oblivious RACs, the cryptographic property holds also in the device-independent model.
A simple model of optimal population coding for sensory systems.
Doi, Eizaburo; Lewicki, Michael S
2014-08-01
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.
The microstructures of cold dense systems as informed by hard sphere models and optimal packings
NASA Astrophysics Data System (ADS)
Hopkins, Adam Bayne
Sphere packings, or arrangements of "billiard balls" of various sizes that never overlap, are especially informative and broadly applicable models. In particular, a hard sphere model describes the important foundational case where potential energy due to attractive and repulsive forces is not present, meaning that entropy dominates the system's free energy. Sphere packings have been widely employed in chemistry, materials science, physics and biology to model a vast range of materials including concrete, rocket fuel, proteins, liquids and solid metals, to name but a few. Despite their richness and broad applicability, many questions about fundamental sphere packings remain unanswered. For example, what are the densest packings of identical three-dimensional spheres within certain defined containers? What are the densest packings of binary spheres (spheres of two different sizes) in three-dimensional Euclidean space R3? The answers to these two questions are important in condensed matter physics and solid-state chemistry. The former is important to the theory of nucleation in supercooled liquids and the latter in terms of studying the structure and stability of atomic and molecular alloys. The answers to both questions are useful when studying the targeted self-assembly of colloidal nanostructures. In this dissertation, putatively optimal answers to both of these questions are provided, and the applications of these findings are discussed. The methods developed to provide these answers, novel algorithms combining sequential linear and nonlinear programming techniques with targeted stochastic searches of configuration space, are also discussed. In addition, connections between the realizability of pair correlation functions and optimal sphere packings are studied, and mathematical proofs are presented concerning the characteristics of both locally and globally maximally dense structures in arbitrary dimension d. Finally, surprising and unexpected findings are
Investigation of Navier-Stokes Code Verification and Design Optimization
NASA Technical Reports Server (NTRS)
Vaidyanathan, Rajkumar
2004-01-01
With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization
Recent developments in DYNSUB: New models, code optimization and parallelization
Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.
2013-07-01
DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate pin level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions applying the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin level safety parameters for engineering sized cases in a scientific environment. (authors)
Iterative Phase Optimization of Elementary Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Müller, M.; Rivas, A.; Martínez, E. A.; Nigg, D.; Schindler, P.; Monz, T.; Blatt, R.; Martin-Delgado, M. A.
2016-07-01
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
Iterative optimal subcritical aerodynamic design code including profile drag
NASA Technical Reports Server (NTRS)
Kuhlman, J. M.
1983-01-01
A subcritical aerodynamic design computer code has been developed, which uses linearized aerodynamics along with sweep theory and airfoil data to obtain minimum total drag preliminary designs for multiple planform configurations. These optimum designs consist of incidence distributions yielding minimum total drag at design values of Mach number and lift and pitching moment coefficients. Linear lofting is used between airfoil stations. Solutions for isolated transport wings have shown that the solution is unique, and that including profile drag effects decreases tip loading and incidence relative to values obtained for minimum induced drag solutions. Further, including effects of variation of profile drag with Reynolds number can cause appreciable changes in the optimal design for tapered wings. Example solutions are also discussed for multiple planform configurations.
The optimization of random network coding in wireless MESH networks
NASA Astrophysics Data System (ADS)
Pang, Chunjiang; Pan, Xikun
2013-03-01
In order to improve the efficiency of wireless mesh network transmission, this paper focuses on network coding technology. Using network coding can significantly increase a wireless mesh network's throughput, but it inevitably adds computational complexity to the network, and the traditional linear network coding algorithm requires awareness of the whole network topology, which is impossible in the ever-changing topology of wireless mesh networks. In this paper, we use a distributed network coding strategy: random network coding, which does not need to know the whole topology of the network. In order to decrease the computational complexity, this paper suggests an improved strategy for random network coding: do not code packets that bring no benefit to the overall transmission. In this paper, we list several situations in which coding is not necessary. Simulation results show that applying these strategies can improve the efficiency of wireless mesh network transmission.
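The random network coding idea can be sketched over GF(2) (the field size and integer packet representation are illustrative choices): relays forward random XOR combinations of the source packets without knowing the topology, and the sink decodes by Gaussian elimination, failing gracefully when the received combinations happen to be rank-deficient.

```python
import random

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2): recover the k source packets from
    (coefficient-vector, payload) pairs, or None if rank-deficient."""
    rows = [list(c) + [p] for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                       # not enough independent combos
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

def rlnc_demo(packets, n_coded, seed):
    """Each relay emits a random XOR combination of the source packets,
    needing no knowledge of the global topology; the sink then decodes."""
    rng = random.Random(seed)
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return rlnc_decode(coded, k)

results = [rlnc_demo([9, 5, 7, 3], 8, seed=s) for s in range(20)]
```

With 8 random combinations of 4 packets, decoding succeeds with high probability; production schemes use a larger field (e.g. GF(2^8)) to make rank deficiency rarer.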
Optimal spike-based communication in excitable networks with strong-sparse and weak-dense links.
Teramae, Jun-nosuke; Tsubo, Yasuhiro; Fukai, Tomoki
2012-01-01
The connectivity of complex networks and functional implications has been attracting much interest in many physical, biological and social systems. However, the significance of the weight distributions of network links remains largely unknown except for uniformly- or Gaussian-weighted links. Here, we show analytically and numerically, that recurrent neural networks can robustly generate internal noise optimal for spike transmission between neurons with the help of a long-tailed distribution in the weights of recurrent connections. The structure of spontaneous activity in such networks involves weak-dense connections that redistribute excitatory activity over the network as noise sources to optimally enhance the responses of individual neurons to input at sparse-strong connections, thus opening multiple signal transmission pathways. Electrophysiological experiments confirm the importance of a highly broad connectivity spectrum supported by the model. Our results identify a simple network mechanism for internal noise generation by highly inhomogeneous connection strengths supporting both stability and optimal communication.
Image-Guided Non-Local Dense Matching with Three-Steps Optimization
NASA Astrophysics Data System (ADS)
Huang, Xu; Zhang, Yongjun; Yue, Zhaoxi
2016-06-01
This paper introduces a new image-guided non-local dense matching algorithm that focuses on how to solve the following problems: 1) mitigating the influence of vertical parallax on the cost computation in stereo pairs; 2) guaranteeing the performance of dense matching in homogeneous intensity regions with significant disparity changes; 3) limiting the inaccurate cost propagated from depth discontinuity regions; 4) guaranteeing that the path between two pixels in the same region is connected; and 5) defining the cost propagation function between reliable and unreliable pixels during disparity interpolation. This paper combines the Census histogram and an improved histogram of oriented gradients (HOG) feature as the cost metrics, which are then aggregated based on a new iterative non-local matching method and the semi-global matching method. Finally, new rules of cost propagation between the valid pixels and the invalid pixels are defined to improve the disparity interpolation results. The results of our experiments using the benchmarks and the Toronto aerial images from the International Society for Photogrammetry and Remote Sensing (ISPRS) show that the proposed new method can outperform most of the current state-of-the-art stereo dense matching methods.
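The Census component of such cost metrics can be sketched as follows (window size and bit ordering are illustrative; the paper combines a Census histogram with an improved HOG feature): each pixel is described by neighbor-darker-than-center bits, and the matching cost is a Hamming distance, which is invariant to radiometric offsets between the stereo images.

```python
def census(img, r=1):
    """Census transform: each interior pixel becomes a bit string recording
    whether each neighbor in the (2r+1)x(2r+1) window is darker than it."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
            out[y][x] = bits
    return out

def census_cost(a, b):
    """Matching cost = Hamming distance between census descriptors."""
    return bin(a ^ b).count("1")

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
brighter = [[v + 10 for v in row] for row in img]   # radiometric offset
c1, c2 = census(img), census(brighter)
```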
Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors
Sale, D.; Jonkman, J.; Musial, W.
2009-08-01
This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
Optimal Near-Hitless Network Failure Recovery Using Diversity Coding
ERIC Educational Resources Information Center
Avci, Serhat Nazim
2013-01-01
Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency, but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
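The core idea of diversity coding can be sketched in a few lines: one extra link carries the XOR parity of the data links, so a single link failure is repaired at the receiver immediately, with no signaling or rerouting (the link count and payload values below are illustrative).

```python
def encode_diversity(data_links):
    """Diversity coding: k data streams on k links plus one XOR parity
    stream on an extra protection link (proactive, no signaling)."""
    parity = 0
    for d in data_links:
        parity ^= d
    return data_links + [parity]

def recover(received, failed_index):
    """Near-hitless recovery: rebuild the single failed link's data by
    XOR-ing everything that survived (parity link included)."""
    out = 0
    for i, d in enumerate(received):
        if i != failed_index:
            out ^= d
    return out

links = encode_diversity([0b1010, 0b0110, 0b1111])
```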
Efficacy of Code Optimization on Cache-based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
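The unit-stride point can be sketched with two traversal orders of the same 2-D array. In Python the cache effect is largely hidden by interpreter overhead, but in compiled code the row-major loop is the cache-friendly one: each fetched cache line is fully consumed before the next fetch.

```python
def sum_row_major(a):
    """Unit-stride traversal: consecutive inner-loop accesses touch
    adjacent memory, so each fetched cache line is fully used."""
    s = 0.0
    for row in a:
        for v in row:
            s += v
    return s

def sum_col_major(a):
    """Stride-n traversal: each inner-loop access jumps a whole row,
    touching a new cache line on nearly every iteration."""
    s = 0.0
    for j in range(len(a[0])):
        for row in a:
            s += row[j]
    return s

a = [[1.0] * 50 for _ in range(50)]
s1, s2 = sum_row_major(a), sum_col_major(a)   # same result, different locality
```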
Parameter optimization capability in the trajectory code PMAST (Point-Mass Simulation Tool)
Outka, D.E.
1987-01-28
Trajectory optimization capability has been added to PMAST through addition of the Recursive Quadratic Programming code VF02AD. The scope of trajectory optimization problems the resulting code can solve is very broad, as it takes advantage of the versatility of the original PMAST code. Most three-degree-of-freedom flight-vehicle problems can be simulated with PMAST, and up to 25 parameters specifying initial conditions, weights, control histories and other problem-deck inputs can be used to meet trajectory constraints in some optimal manner. This report outlines the mathematical formulation of the optimization technique, describes the input requirements and suggests guidelines for problem formulation. An example problem is presented to demonstrate the use and features of the optimization portions of the code.
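A toy example of tuning a trajectory parameter to meet a constraint: PMAST itself drives up to 25 problem-deck parameters through the recursive quadratic programming code VF02AD, whereas the point-mass projectile and one-parameter bisection below are only illustrative stand-ins.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flight_range(v0, theta):
    """Downrange distance of a point-mass projectile over flat ground."""
    return v0 * v0 * math.sin(2.0 * theta) / G

def solve_launch_angle(v0, target, tol=1e-9):
    """Adjust one trajectory parameter (launch angle) by bisection so the
    simulated trajectory meets a range constraint; the range is monotone
    in theta on [0, pi/4], so the bracket is valid."""
    lo, hi = 0.0, math.pi / 4
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flight_range(v0, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = solve_launch_angle(100.0, 500.0)   # angle hitting a 500 m range
```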
GPU Optimizations for a Production Molecular Docking Code.
Landaverde, Raphael; Herbordt, Martin C
2014-09-01
Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users.
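The exhaustive translational scoring that PIPER accelerates with 3-D FFTs can be sketched in one dimension with a direct sum (the grids and score are illustrative; the real code correlates pairwise-potential grids in 3-D via MKL or cuFFT, which is why the FFT library dominates the run time).

```python
def correlate_shift(receptor, ligand):
    """Exhaustive translational scan of a toy 1-D docking grid: score every
    shift of the ligand against the receptor and keep the best. Production
    codes compute all shifts at once with an FFT-based correlation."""
    n, m = len(receptor), len(ligand)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n - m + 1):
        score = sum(receptor[shift + i] * ligand[i] for i in range(m))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score

# the ligand profile matches the receptor pocket at shift 2
shift, score = correlate_shift([0, 0, 1, 3, 1, 0, 0], [1, 3, 1])
```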
Design of zero reference codes by means of a global optimization method
NASA Astrophysics Data System (ADS)
Saez Landete, José; Alonso, José; Bernabeu, Eusebio
2005-01-01
Grating measurement systems can be used for displacement and angle measurements. They require zero reference codes to obtain zero reference signals and absolute measurements. The zero reference signals are obtained from the autocorrelation of two identical zero reference codes. The design of codes which generate optimum signals is rather complex, especially for large codes. In this paper we present a global optimization method, a DIRECT algorithm, for the design of zero reference codes. This method proves to be a powerful tool for solving this inverse problem.
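The autocorrelation at the heart of the zero reference signal can be sketched directly (the 4-bit code below is illustrative; real codes are much longer, and choosing them is the inverse problem the DIRECT algorithm solves): the zero-shift peak equals the number of transparent slits, and a good code keeps all secondary lobes low.

```python
def zero_reference_signal(code):
    """One-sided autocorrelation of a binary zero reference code; the
    k = 0 term is the central peak used as the zero reference signal."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k))
            for k in range(n)]

sig = zero_reference_signal([1, 1, 0, 1])   # peak 3, secondary lobes 1
```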
Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding
Kronberg, D. A.; Molotkov, S. N.
2010-07-15
A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.
Beck, Dominik; Thoms, Julie A I; Perera, Dilmi; Schütte, Judith; Unnikrishnan, Ashwin; Knezevic, Kathy; Kinston, Sarah J; Wilson, Nicola K; O'Brien, Tracey A; Göttgens, Berthold; Wong, Jason W H; Pimanda, John E
2013-10-03
Genome-wide combinatorial binding patterns for key transcription factors (TFs) have not been reported for primary human hematopoietic stem and progenitor cells (HSPCs), and have constrained analysis of the global architecture of molecular circuits controlling these cells. Here we provide high-resolution genome-wide binding maps for a heptad of key TFs (FLI1, ERG, GATA2, RUNX1, SCL, LYL1, and LMO2) in human CD34(+) HSPCs, together with quantitative RNA and microRNA expression profiles. We catalog binding of TFs at coding genes and microRNA promoters, and report that combinatorial binding of all 7 TFs is favored and associated with differential expression of genes and microRNA in HSPCs. We also uncover a previously unrecognized association between FLI1 and RUNX1 pairing in HSPCs, we establish a correlation between the density of histone modifications that mark active enhancers and the number of overlapping TFs at a peak, we demonstrate bivalent histone marks at promoters of heptad target genes in CD34(+) cells that are poised for later expression, and we identify complex relationships between specific microRNAs and coding genes regulated by the heptad. Taken together, these data reveal the power of integrating multifactor sequencing of chromatin immunoprecipitates with coding and noncoding gene expression to identify regulatory circuits controlling cell identity.
Wing design code using three-dimensional Euler equations and optimization
NASA Technical Reports Server (NTRS)
Chang, I-Chung; Torres, Francisco J.; Van Dam, C. P.
1991-01-01
This paper describes a new wing design code which is based on the Euler equations and a constrained numerical optimization technique. The geometry modification is based on a set of fundamental modes defined on the unit interval. A design example involving a high-speed civil transport wing is presented to demonstrate the usefulness of the design code. It is shown that the use of an Euler solver in the direct numerical optimization procedures is affordable on the current generation of supercomputers.
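A geometry perturbation built from fundamental modes on the unit interval can be sketched as below; the sine basis is an assumed example, since the abstract does not specify the mode shapes.

```python
import math

def mode_sum(x, coeffs):
    """Geometry perturbation at normalized station x in [0, 1] as a
    weighted sum of fundamental modes (here, assumed sine modes);
    the optimizer would adjust the coefficients `coeffs`."""
    return sum(c * math.sin((k + 1) * math.pi * x)
               for k, c in enumerate(coeffs))

# mid-span value of a three-mode perturbation
y_mid = mode_sum(0.5, [1.0, 0.0, 2.0])
```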
Optimization of Ambient Noise Cross-Correlation Imaging Across Large Dense Array
NASA Astrophysics Data System (ADS)
Sufri, O.; Xie, Y.; Lin, F. C.; Song, W.
2015-12-01
Ambient noise tomography is currently one of the most studied topics of seismology. It offers the possibility of studying the physical properties of rocks from shallow subsurface depths down to upper-mantle depths using recorded noise sources. A network of new seismic sensors capable of recording continuous seismic noise and doing the processing on-site at the same time could help to assess the possible risk of volcanic activity on a volcano and help to understand the changes in the physical properties of a fault before and after an earthquake occurs. This new seismic sensor technology could also be used in the oil and gas industry to determine the depletion rate of a reservoir and help to improve velocity models for obtaining better seismic reflection cross-sections. Our recent NSF-funded project is bringing seismologists, signal processors, and computer scientists together to develop a new ambient noise seismic imaging system which could record continuous seismic noise, process it on-site, and send Green's functions and/or tomography images to the network. Such an imaging system requires an optimal number of sensors, sensor communication, and processing of the recorded data. In order to solve these problems, we first started working on the problem of the optimal number of sensors and the communication between these sensors by using the small-aperture dense network called the Sweetwater Array, deployed by Nodal Seismic in 2014. We downloaded ~17 days of continuous data recorded by 2268 one-component stations between March 30 and April 16, 2015 from the IRIS DMC and performed cross-correlation to determine the lag times between station pairs. The lag times were then entered in matrix form. Our goal is to select random lag-time values in the matrix, assume all other elements of the matrix are either missing or unknown, and perform a matrix completion technique to find out how close the results from matrix completion would be to the actual calculated values. This would give us a better idea
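The per-pair lag estimation step can be sketched as picking the shift that maximizes the cross-correlation of two station records (the ten-sample pulse below is an illustrative stand-in for a noise-derived Green's function).

```python
def best_lag(a, b, max_lag):
    """Estimate the travel-time lag between two station records as the
    integer shift that maximizes their cross-correlation."""
    def xcorr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# station B records the same pulse 3 samples later than station A
pulse    = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
delayed  = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
lag = best_lag(pulse, delayed, 5)
```

In the project described above, such lags for all station pairs populate the matrix that is then subjected to matrix completion.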
Aircraft Course Optimization Tool Using GPOPS MATLAB Code
2012-03-01
preceding paragraph and in reality relies heavily on the pseudospectral portion of GPOPS' name. More specifically, GPOPS uses the Radau Pseudospectral...Software for Solving Multiple-Phase Optimal Control Problems Using hp-Adaptive Pseudospectral Methods," 2011. 9. Gill, P. E., Murray, W., and Saunders, M
Study of dense helium plasma in the optimal hypernetted chain approximation
Mueller, H.; Langanke, K.
1994-01-01
We have studied the helium plasma in the hypernetted chain approximation considering both short-ranged internuclear and long-ranged Coulomb interactions. The optimal two-particle wave function has been determined in fourth order; fifth-order corrections have been considered in the calculation of the two-body and three-body correlation functions. The latter has been used to determine the pycnonuclear triple-alpha-fusion rate in the density regime 10⁸ g/cm³ ≤ ρ ≤ 10¹⁰ g/cm³, which is of importance for the crust evolution of an accreting old neutron star. The influence of three-particle terms in the many-body wave function on the rate is estimated within an additional variational hypernetted chain calculation. Our results support the idea that the helium liquid undergoes a phase transition to stable ⁸Be matter at densities ρ ≈ 3×10⁹ g/cm³, as the plasma-induced screening potential then becomes strong enough to bind the ⁸Be ground state.
Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding
NASA Astrophysics Data System (ADS)
Chen, Lulin; Garbacea, Ilie
2006-01-01
In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. The analysis yields λ as a function of rate, distortion, and coding input statistics: λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k₀, with β, δ, and k₀ coding constants and σ² the variance of the prediction-error input. λ(R, D, σ²) describes the ubiquitous relationship between coding statistics and coding input in hybrid video coding such as H.263, MPEG-2/4, and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables fine-grained encoder design and encoder control.
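As a quick numeric check of the quoted expression, the sketch below evaluates λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k₀ directly; the constants β, δ, and k₀ are hypothetical placeholders, not values from the paper.

```python
import math

def lagrange_lambda(R, D, var, beta=1.0, delta=0.0, k0=0.0):
    """lambda(R, D, sigma^2) = beta*(ln(sigma^2/D) + delta)*D/R + k0,
    the adaptive Lagrange multiplier quoted in the abstract."""
    return beta * (math.log(var / D) + delta) * D / R + k0

# With var/D = e the log term is 1, so lambda reduces to beta*D/R + k0.
lam = lagrange_lambda(R=2.0, D=1.0, var=math.e)
```

The form makes the qualitative behavior visible: for fixed distortion and source variance, spending more rate lowers λ, i.e. the optimizer weights rate savings less when bits are plentiful.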
Optimal coding of vectorcardiographic sequences using spatial prediction.
Augustyniak, Piotr
2007-05-01
This paper discusses the principles, implementation details, and advantages of a sequence coding algorithm applied to the compression of vectorcardiograms (VCG). The main novelty of the proposed method is the automatic management of distortion distribution, controlled by the local signal contents in both technical and medical aspects. As in clinical practice, the VCG loops representing the P, QRS, and T waves in three-dimensional (3-D) space are considered here as three simultaneous sequences of objects. Because of the similarity of neighboring loops, encoding the prediction-error values significantly reduces the data set volume. The residual values are decorrelated with the discrete cosine transform (DCT) and truncated at a certain energy threshold. The presented method is based on the irregular temporal distribution of medical data in the signal and takes advantage of a variable sampling frequency for automatically detected VCG loops. The features of the proposed algorithm are confirmed by the results of a numerical experiment carried out on a wide range of real records. The average data reduction ratio reaches a value of 8.15, while the percent root-mean-square difference (PRD) distortion ratio for the most important sections of the signal does not exceed 1.1%.
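A minimal sketch of the prediction-plus-DCT step, assuming that "truncated at a certain energy threshold" means keeping the leading coefficients holding a fixed fraction of the residual energy; the loop data and threshold here are illustrative only, not the paper's codec.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n-by-n matrix (NumPy only)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def encode_loop(loop, prev_loop, energy_keep=0.999):
    """Predict the current loop from the previous one, DCT the residual,
    and keep the leading coefficients holding `energy_keep` of its energy."""
    C = dct_matrix(len(loop))
    c = C @ (loop - prev_loop)                 # decorrelated prediction error
    cum = np.cumsum(c ** 2) / np.sum(c ** 2)
    k = int(np.searchsorted(cum, energy_keep)) + 1
    return c[:k]

def decode_loop(coeffs, prev_loop):
    n = len(prev_loop)
    c = np.zeros(n)
    c[:len(coeffs)] = coeffs
    return prev_loop + dct_matrix(n).T @ c     # inverse of orthonormal DCT

t = np.linspace(0.0, 1.0, 128)
prev = np.sin(2 * np.pi * t)                   # previous loop (toy waveform)
cur = prev + 0.1 * np.cos(2 * np.pi * t)       # similar neighboring loop
coeffs = encode_loop(cur, prev)
rec = decode_loop(coeffs, prev)
```

Because neighboring loops are similar, the residual is small and smooth, so only a handful of DCT coefficients survive the energy threshold while reconstruction error stays tiny.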
Efficacy of Code Optimization on Cache-Based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), and Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
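One representative locality optimization of the kind evaluated here is loop blocking (tiling). The sketch below contrasts a naive triple loop with a blocked version; the paper's experiments were on compiled numerical code, so this is a language-neutral illustration of the transformation itself, with an arbitrary block size.

```python
import random

def matmul_naive(A, B):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_blocked(A, B, bs=8):
    """Same arithmetic, but iterated over bs-by-bs tiles so that each tile
    of A, B, and C is reused while it is still resident in cache."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

random.seed(1)
n = 20
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]
C1 = matmul_naive(A, B)
C2 = matmul_blocked(A, B)
```

The transformation changes only the iteration order, not the result, which is exactly why its payoff varies so much across cache hierarchies.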
On the optimality of code options for a universal noiseless coder
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner
1991-01-01
A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropies. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results obtained on actual aerial imagery confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
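The flavor of such code options can be sketched with a Golomb-Rice coder, the variable-length family usually associated with the Rice algorithm; the adaptive selection below simply picks the parameter k giving the shortest total length for a block of samples, a simplification of the module's actual option-selection logic.

```python
def rice_encode(n, k):
    """Unary-coded quotient (1s terminated by a 0) plus a k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k:
        bits += format(r, '0' + str(k) + 'b')
    return bits

def rice_decode(bits, k):
    q = bits.index('0')                       # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

def best_k(samples, ks=range(8)):
    """Adaptively select the option (here: the Rice parameter k) that
    minimizes the total encoded length for a block of samples."""
    return min(ks, key=lambda k: sum(len(rice_encode(s, k)) for s in samples))
```

Low-entropy blocks favor small k (short remainders), high-entropy blocks favor large k (short unary parts), which is the mechanism that lets one adaptive structure track a broad range of source entropies.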
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and the transmission of multiple transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
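The simplest code in the space-time block code family is the two-transmit-antenna Alamouti scheme; the following noiseless sketch of its encoding and linear combining is illustrative background, not the paper's power-allocation formulation.

```python
import numpy as np

def alamouti_encode(pairs):
    """Slot 1 sends (s1, s2); slot 2 sends (-conj(s2), conj(s1))."""
    s = np.asarray(pairs, dtype=complex).reshape(-1, 2)
    out = np.empty((2 * s.shape[0], 2), dtype=complex)
    out[0::2] = s
    out[1::2, 0] = -np.conj(s[:, 1])
    out[1::2, 1] = np.conj(s[:, 0])
    return out

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining for a flat channel (h1, h2); recovers the symbol
    pair up to the array gain |h1|^2 + |h2|^2."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1 = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2 = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1, s2

h1, h2 = 0.8 - 0.3j, -0.2 + 0.9j               # example channel gains
s = np.array([1 + 1j, -1 + 1j])                # one QPSK symbol pair
tx = alamouti_encode(s)
r1 = h1 * tx[0, 0] + h2 * tx[0, 1]             # received in slot 1
r2 = h1 * tx[1, 0] + h2 * tx[1, 1]             # received in slot 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
```

The combining step decouples the two symbols without any matrix inversion, which is what makes STBCs attractive when minimizing transmission power for a target QoS.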
DOPEX-1D2C: A one-dimensional, two-constraint radiation shield optimization code
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1973-01-01
A one-dimensional, two-constraint radiation shield weight optimization procedure and a computer program, DOPEX-1D2C, are described. DOPEX-1D2C uses the steepest descent method to alter a set of initial (input) thicknesses of a spherical shield configuration to achieve minimum weight while simultaneously satisfying two dose-rate constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. Code input instructions, a FORTRAN-4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is less than 1/2 minute on an IBM 7094.
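The steepest-descent-with-dose-constraints idea can be sketched with a toy version of the exponential dose-shield model; all densities, attenuation coefficients, dose limits, and the quadratic-penalty update below are invented for illustration and stand in for DOPEX's actual input data and update rule.

```python
import numpy as np

RHO = np.array([1.0, 2.0, 0.5])        # layer "densities" (weight per thickness)
MU = np.array([[0.5, 0.8, 0.3],        # attenuation of dose 1 by each layer
               [0.6, 0.2, 0.9]])       # attenuation of dose 2 by each layer
D0 = np.array([100.0, 80.0])           # unshielded dose rates
LIM = np.array([1.0, 1.0])             # the two dose-rate constraints

def doses(t):
    return D0 * np.exp(-MU @ t)        # exponential dose-thickness relation

def weight(t):
    return RHO @ t

def optimize(t0, lr=0.002, c=500.0, n_iter=40000):
    """Steepest descent on weight plus a quadratic penalty for violated
    dose constraints; thicknesses are kept non-negative."""
    t = t0.astype(float).copy()
    for _ in range(n_iter):
        d = doses(t)
        viol = np.maximum(d - LIM, 0.0)
        grad = RHO - 2.0 * c * (MU.T @ (viol * d))   # d(weight+penalty)/dt
        t = np.maximum(t - lr * grad, 0.0)
    return t

t0 = np.full(3, 20.0)                  # generously thick, feasible start
t_opt = optimize(t0)
```

The descent sheds weight until a dose constraint activates, then slides along the constraint boundary, mirroring the qualitative behavior the abstract describes for the spherical-shield problem.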
NASA Astrophysics Data System (ADS)
Hetling, Kenneth J.; Saulnier, Gary J.; Das, Pankaj K.
1995-04-01
In communications systems, the message signal is sometimes spread over a large bandwidth in order to realize performance gains in the presence of narrowband interference, multipath propagation, and multiuser interference. The extent to which performance is improved is highly dependent upon the spreading code implemented. Traditionally, the spreading codes have consisted of pseudo-noise (PN) sequences whose chip values are limited to bipolar values. Recently, however, alternatives to the PN sequences have been studied, including wavelet-based and PR-QMF-based spreading codes. The spreading codes implemented are the basis functions of a particular wavelet transform or PR-QMF bank. Since the choice of available basis functions is much larger than that of PN sequences, it is hoped that better performance can be achieved by choosing a basis tailored to the system requirements mentioned above. In this paper, a design method is presented to construct a PR-QMF bank which will generate spreading codes optimized for operating in a multiuser interference environment. Objective functions are developed for the design criteria, and a multivariable constrained optimization problem is solved to generate the coefficients used in the filter bank. Once the filter bank is complete, the spreading codes are extracted and implemented in the spread spectrum system. System bit error rate (BER) curves are generated from computer simulation for analysis. Curves are generated for both the single user and the CDMA environment, and performance is compared to that attained using Gold codes.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2001-09-10
The fieldwork associated with Task 1 (Baseline Assessment) was completed this quarter. Detailed cyclone inspections were completed at all but one plant during maintenance shifts. Analysis of the test samples is also currently underway in Task 4 (Sample Analysis). A Draft Recommendation was prepared for the management at each test site in Task 2 (Circuit Modification). All required procurements were completed. Density tracers were manufactured and tested for quality control purposes. Special sampling tools were also purchased and/or fabricated for each plant site. The preliminary experimental data show that the partitioning performance for all seven HMC circuits was generally good. This was attributed to well-maintained cyclones and good operating practices. However, the density tracers revealed that most circuits suffered from poor control of the media cutpoint. These problems were attributed to poor x-ray calibration and improper manual density measurements. These conclusions will be validated after the analyses of the composite samples have been completed.
DENSE MEDIA CYCLONE OPTIMIZATION
David M. Hyman
2002-01-14
All work associated with Task 1 (Baseline Assessment) was successfully completed, and preliminary corrections/recommendations were provided to the management at each test site. Detailed float-sink tests were completed for Site No. 1 and are currently underway for Sites No. 2-No. 4. Unfortunately, the work associated with sample analyses (Task 4--Sample Analysis) has been delayed because of a backlog of coal samples at the commercial laboratory participating in this project. As a result, a no-cost project time extension may be necessary in order to complete the project. A decision will be made at the end of the next reporting period. Work completed this quarter included (i) development of mass balance routines for data analysis, (ii) formulation of an expert system rule base, and (iii) completion of statistical computations and mathematical curve fits for the density tracer test data. In addition, an ''O & M Checklist'' was prepared to provide plant operators with simple operating and maintenance guidelines that must be followed to obtain good HMC performance.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2003-09-09
All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Efforts are underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2003-01-15
All technical project activities have been successfully completed. This effort included (1) completion of field testing using density tracers, (2) development of a spreadsheet-based HMC simulation program, and (3) preparation of a menu-driven expert system for HMC trouble-shooting. The final project report is now being prepared for submission to DOE for comment and review. The submission has been delayed due to difficulties in compiling the large base of technical information generated by the project. Technical personnel are now working to complete this report. Efforts are underway to finalize the financial documents necessary to demonstrate that the cost-sharing requirements for the project have been met.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2002-09-14
All project activities are now winding down. Follow-up tracer tests were conducted at several of the industrial test sites and analysis of the experimental data is currently underway. All required field work was completed during this quarter. In addition, the heavy medium cyclone simulation and expert system programs are nearly completed and user manuals are being prepared. Administrative activities (e.g., project documents, cost-sharing accounts, etc.) are being reviewed and prepared for final submission to DOE. All project reporting requirements are up to date. All financial expenditures are within approved limits.
NASA Astrophysics Data System (ADS)
Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.
2015-11-01
In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
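The flavor of these physics-informed reductions (not OpenMC's actual implementation) can be sketched for the down-scattering case: a neutron that only loses energy must land at or below its previous grid index, and elastic kinematics bound the energy drop to a factor α = ((A-1)/(A+1))², so only a short window below the previous index needs searching.

```python
import bisect
import random

def find_index_full(grid, E):
    """Standard binary search: i such that grid[i] <= E < grid[i+1]."""
    return bisect.bisect_right(grid, E) - 1

def find_index_windowed(grid, E, last):
    """If the particle only lost energy since index `last`, walk down from
    there; the kinematic bound alpha*E <= E' keeps the walk short."""
    i = min(last, len(grid) - 2)
    while i > 0 and grid[i] > E:
        i -= 1
    return i

# Toy logarithmic energy grid, ~1e-5 eV to ~1e7 eV.
grid = [10.0 ** (-5 + 0.01 * j) for j in range(1200)]
alpha = ((12 - 1) / (12 + 1)) ** 2         # carbon-like scatterer, A = 12

random.seed(2)
E = 1.0e6
idx = find_index_full(grid, E)
windowed, full = [], []
for _ in range(50):
    E *= random.uniform(alpha, 1.0)        # one elastic down-scatter
    idx = find_index_windowed(grid, E, idx)
    windowed.append(idx)
    full.append(find_index_full(grid, E))
```

Both searches return the same indices, but the windowed version touches only the few grid points between αE and E rather than the full 1200-point vector, which is the vector-length reduction the paper exploits.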
Optimization technique of wavefront coding system based on ZEMAX externally compiled programs
NASA Astrophysics Data System (ADS)
Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2016-10-01
Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: an evaluation function for the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and optimization speed is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal phase-mask parameters; convergence is accelerated with a genetic algorithm (GA), and a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. Optimization of both a rotationally symmetric phase mask and a cubic phase mask has been completed with this method: the depth of focus increases nearly 3 times with the rotationally symmetric mask and up to 10 times with the cubic mask, the variation among MTF curves decreases markedly, and the optimized systems operate over a temperature range of -40° to 60°. Results show that, owing to the externally compiled functions and DDE, this method makes it more convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties, and it is of particular value for the optimization of unconventional optical systems.
Optimal Multicarrier Phase-Coded Waveform Design for Detection of Extended Targets
Sen, Satyabrata; Glover, Charles Wayne
2013-01-01
We design a parametric multicarrier phase-coded (MCPC) waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. Traditional waveform design techniques provide only the optimal energy spectral density of the transmit waveform and suffer a performance loss in the synthesis process of the time-domain signal. Therefore, we opt for directly designing an MCPC waveform in terms of its time-frequency codes to obtain the optimal detection performance. First, we describe the modeling assumptions considering an extended target buried within signal-dependent clutter with known power spectral density, and deduce the performance characteristics of the optimal detector. Then, considering an MCPC signal transmission, we express the detection characteristics in terms of the phase codes of the MCPC waveform and propose to optimally design the MCPC signal by maximizing the detection probability. Our numerical results demonstrate that the designed MCPC signal attains the optimal detection performance and requires less computational time than another parametric waveform design approach.
User’s Manual for Solid Propulsion Optimization Code (SPOC). Volume I. Technical Description
1981-08-01
Type of report & period covered: User's Guide, 28 Mar 80 - 21 Aug 81; Volume I - Technical Description. The remainder of the record is OCR residue from the report documentation page and a propellant-ingredient table (rate catalysts: iron oxide (solid), ferrocene (liquid); combustion stabilizers).
Optimizing the use of a sensor resource for opponent polarization coding
Heras, Francisco J.H.
2017-01-01
Flies use specialized photoreceptors R7 and R8 in the dorsal rim area (DRA) to detect skylight polarization. R7 and R8 form a tiered waveguide (central rhabdomere pair, CRP) with R7 on top, filtering light delivered to R8. We examine how the division of a given resource, CRP length, between R7 and R8 affects their ability to code polarization angle. We model optical absorption to show how the length fractions allotted to R7 and R8 determine the rates at which they transduce photons, and correct these rates for transduction unit saturation. The rates give the polarization signal and photon noise in R7 and in R8. Their signals are combined in an opponent unit, intrinsic noise is added, and the unit's output is analysed to extract two measures of coding ability, the number of discriminable polarization angles and the mutual information. A very long R7 maximizes opponent signal amplitude, but codes inefficiently due to photon noise in the very short R8. Discriminability and mutual information are optimized by maximizing the signal-to-noise ratio, SNR. At lower light levels approximately equal lengths of R7 and R8 are optimal because photon noise dominates. At higher light levels intrinsic noise comes to dominate and a shorter R8 is optimal; the optimum R8 length fraction falls to one third. This intensity-dependent range of optimal length fractions corresponds to the range observed in different fly species and is not affected by transduction unit saturation. We conclude that a limited resource, rhabdom length, can be divided between two polarization sensors, R7 and R8, to optimize opponent coding. We also find that coding ability increases sub-linearly with total rhabdom length, according to the law of diminishing returns. Consequently, the specialized shorter central rhabdom in the DRA codes polarization twice as efficiently, with respect to rhabdom length, as the longer rhabdom used in the rest of the eye. PMID:28316880
Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Farassat, F.
1998-01-01
In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecision.
Video coding using arbitrarily shaped block partitions in globally optimal perspective
NASA Astrophysics Data System (ADS)
Paul, Manoranjan; Murshed, Manzur
2011-12-01
Algorithms using content-based patterns to segment moving regions at the macroblock (MB) level have exhibited good potential for improved coding efficiency when embedded into the H.264 standard as an extra mode. The content-based pattern generation (CPG) algorithm provides a locally optimal result, as only one pattern can be optimally generated from a given set of moving regions; it fails to provide optimal results for multiple patterns over entire sets. Obviously, a globally optimal solution that clusters the set and then generates multiple patterns would enhance the performance further, but such a solution is not achievable due to the non-polynomial nature of the clustering problem. In this paper, we propose a near-optimal content-based pattern generation (OCPG) algorithm which outperforms the existing approach. Coupling OCPG, which generates a set of patterns after clustering the MBs into several disjoint sets, with a direct pattern selection algorithm that allows all the MBs in multiple pattern modes outperforms the existing pattern-based coding when embedded into H.264.
Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning
Yi, Qing; Whaley, Richard Clint; Qasem, Apan; Quinlan, Daniel
2013-11-23
This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully automated tuning to semi-automated development and to manual programmable control.
Optimal performance of networked control systems with bandwidth and coding constraints.
Zhan, Xi-Sheng; Sun, Xin-xiang; Li, Tao; Wu, Jie; Jiang, Xiao-Wei
2015-11-01
The optimal tracking performance of multiple-input multiple-output (MIMO) discrete-time networked control systems with bandwidth and coding constraints is studied in this paper. The optimal tracking performance of the networked control system is obtained by using the spectral factorization technique and partial fraction expansion. The obtained results demonstrate that the optimal performance is influenced by the directions and locations of the nonminimum-phase zeros and unstable poles of the given plant. In addition, the optimal tracking performance is also closely influenced by the characteristics of the reference signal, the encoding, and the bandwidth and additive white Gaussian noise (AWGN) of the communication channel. Some typical examples are given to illustrate the theoretical results.
The SWAN/NPSOL code system for multivariable multiconstraint shield optimization
Watkins, E.F.; Greenspan, E.
1995-12-31
SWAN is a useful code for the optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost of an ICF target chamber shield, and the background radiation for explosive detection systems, and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module could handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability.
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics such that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
ETRANS: an energy transport system optimization code for distributed networks of solar collectors
Barnhart, J.S.
1980-09-01
The optimization code ETRANS was developed at the Pacific Northwest Laboratory to design and estimate the costs associated with energy transport systems for distributed fields of solar collectors. The code uses frequently cited layouts for dish and trough collectors and optimizes them on a section-by-section basis. The optimal section design is that combination of pipe diameter and insulation thickness that yields the minimum annualized system-resultant cost. Among the quantities included in the costing algorithm are (1) labor and materials costs associated with initial plant construction, (2) operating expenses due to daytime and nighttime heat losses, and (3) operating expenses due to pumping power requirements. Two preliminary series of simulations were conducted to exercise the code. The results indicate that transport system costs for both dish and trough collector fields increase with field size and receiver exit temperature. Furthermore, dish collector transport systems were found to be much more expensive to build and operate than trough transport systems. ETRANS itself is stable and fast-running and shows promise of being a highly effective tool for the analysis of distributed solar thermal systems.
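The section-by-section selection ETRANS performs can be sketched as a brute-force search over discrete pipe diameters and insulation thicknesses; the cost model below (capital, heat-loss, and pumping terms) is entirely hypothetical and stands in for ETRANS's actual costing algorithm.

```python
from itertools import product

def annualized_cost(d, ins):
    """Hypothetical annualized cost per meter for one transport section
    of pipe diameter d (m) with insulation thickness ins (m)."""
    capital = 120.0 * d + 60.0 * ins          # construction labor and materials
    heat_loss = 15.0 / (1.0 + 25.0 * ins)     # day/night thermal losses
    pumping = 0.002 / d ** 5                  # pumping power: drop ~ 1/d^5
    return capital + heat_loss + pumping

diameters = [0.05, 0.10, 0.15, 0.20, 0.30]
insulations = [0.01, 0.03, 0.05, 0.10, 0.15]
best = min(product(diameters, insulations), key=lambda p: annualized_cost(*p))
best_cost = annualized_cost(*best)
```

The trade-off is visible even in this toy model: a larger diameter raises capital cost but cuts pumping power sharply, while thicker insulation trades material cost against heat loss, and the optimum sits in the interior of both ranges.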
An application of anti-optimization in the process of validating aerodynamic codes
NASA Astrophysics Data System (ADS)
Cruz, Juan R.
An investigation was conducted to assess the usefulness of anti-optimization in the process of validating aerodynamic codes. Anti-optimization is defined here as the intentional search for regions where the computational and experimental results disagree. Maximizing such disagreements can be a useful tool in uncovering errors and/or weaknesses in both analyses and experiments. The codes chosen for this investigation were an airfoil code and a lifting line code used together as an analysis to predict three-dimensional wing aerodynamic coefficients. The parameter of interest was the maximum lift coefficient of the three-dimensional wing, CL,max. The test domain encompassed Mach numbers from 0.3 to 0.8 and Reynolds numbers from 25,000 to 250,000. A simple rectangular wing was designed for the experiment. A wind tunnel model of this wing was built and tested in the NASA Langley Transonic Dynamics Tunnel. The test conditions (i.e., Mach and Reynolds numbers) were selected by applying the techniques of response surface methodology and considerations involving the predicted experimental uncertainty. The test was planned and executed in two phases. In the first phase, runs were conducted at the pre-planned test conditions. Based on these results, additional runs were conducted in areas where significant differences in CL,max were observed between the computational results and the experiment, in essence applying the concept of anti-optimization. These additional runs were used to verify the differences in CL,max and assess the extent of the region where these differences occurred. The results of the experiment showed that the analysis was capable of predicting CL,max to within 0.05 over most of the test domain. The application of anti-optimization succeeded in identifying a region where the computational and experimental values of CL,max differed by more than 0.05, demonstrating the usefulness of anti-optimization in the process of validating aerodynamic codes.
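The anti-optimization step, choosing follow-up runs where analysis and experiment disagree most, can be sketched as a search over candidate test conditions; the model and "experiment" functions below are toy surrogates, not the study's aerodynamic data.

```python
from itertools import product

def model_clmax(mach, re):
    """Toy analysis prediction for CL,max over the test domain."""
    return 1.4 - 0.5 * mach - 2.0e-7 * (250000 - re)

def experiment_clmax(mach, re):
    """Toy experimental surrogate: agrees at low Mach, diverges at high
    Mach and high Reynolds number."""
    return model_clmax(mach, re) - 0.08 * (mach - 0.3) * (re / 250000.0)

def anti_optimize(cands):
    """Pick the test condition maximizing |model - experiment|."""
    return max(cands, key=lambda c: abs(model_clmax(*c) - experiment_clmax(*c)))

machs = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
reynolds = [25000, 100000, 175000, 250000]
worst = anti_optimize(list(product(machs, reynolds)))
```

Running the search concentrates the extra wind tunnel runs where the disagreement is largest, which is exactly how the second test phase was allocated in the study.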
NASA Astrophysics Data System (ADS)
Aggarwal, Neha; Vishwa Bandhu, Ashutosh; Sengupta, Supratim
2016-06-01
The origin of a universal and optimal genetic code remains a compelling mystery in molecular biology and marks an essential step in the origin of DNA- and protein-based life. We examine a collective evolution model of genetic code origin that allows for unconstrained horizontal transfer of genetic elements within a finite population of sequences, each of which is associated with a genetic code selected from a pool of primordial codes. We find that when horizontal transfer of genetic elements is incorporated in this more realistic model of code-sequence coevolution in a finite population, it can increase the likelihood of emergence of a more optimal code, eventually leading to its universality through fixation in the population. The establishment of such an optimal code depends on the probability of HGT events. We find that only when the probability of HGT events is above a critical threshold does the ten-amino-acid code whose structure is most consistent with the standard genetic code (SGC) often get fixed in the population with the highest probability. We examine how the threshold is determined by factors such as population size, sequence length, and selection coefficient. Our simulation results reveal the conditions under which sharing of coding innovations through horizontal transfer of genetic elements may have facilitated the emergence of a universal code having a structure similar to that of the SGC.
On the Efficacy of Source Code Optimizations for Cache-Based Systems
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)
1998-01-01
Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
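The locality rule of thumb quoted above can be illustrated with a short NumPy sketch. Python cannot expose cache timing faithfully, so this only shows the two access patterns, unit stride versus stride n, that the abstract contrasts:

```python
import numpy as np

n = 1024
a = np.arange(n * n, dtype=np.float64).reshape(n, n)   # C (row-major) layout

# Unit-stride traversal: each row is contiguous in memory.
row_sum = sum(a[i, :].sum() for i in range(n))

# Stride-n traversal: consecutive elements of a column are n doubles apart.
col_sum = sum(a[:, j].sum() for j in range(n))

print(row_sum == col_sum)   # True: identical result, different access pattern
```

On cache-based hardware the row-wise version is typically much faster, but, as the abstract warns, the actual magnitude (and sometimes even the direction) of the effect depends on the specific processor and compiler.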
MPEG-2/4 Low-Complexity Advanced Audio Coding Optimization and Implementation on DSP
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Huang, Hao-Yu; Chen, Yen-Lin; Peng, Hsin-Yuan; Huang, Jia-Hsiung
This study presents several optimization approaches for the MPEG-2/4 Advanced Audio Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important issue in implementing an AAC codec on embedded and mobile devices is reducing computational complexity and memory consumption. Due to power-saving requirements, most embedded and mobile systems can provide only very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT), and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM core for peripheral control. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.
NASA Technical Reports Server (NTRS)
Martini, William R.
1989-01-01
A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump, or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion with isothermal analysis: one with three adjustable inputs and one with four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.
SOAR: An extensible suite of codes for weld analysis and optimal weld schedules
Eisler, G.R.; Fuerschbach, P.W.
1997-07-01
A suite of MATLAB-based code modules has been developed to provide optimal weld schedules, regulating weld process parameters for CO2 and pulsed Nd:YAG laser welding methods and for arc welding, in support of the Smartweld manufacturing initiative at Sandia National Laboratories. The optimization methodology consists of mixed genetic and gradient-based algorithms that query semi-empirical, nonlinear algebraic models. The optimization output provides heat-input-efficient welds for user-specified weld dimensions. User querying of all weld models is available to examine sub-optimal schedules. In addition, a heat conduction equation solver for 2-D heat flow is available to provide the user with an additional check of weld thermal effects. The inclusion of thermodynamic properties allows the extension of the empirical models to materials other than those tested. All solution methods are provided with graphical user interfaces and display pertinent results in two- and three-dimensional form. The code architecture provides an extensible framework to add an arbitrary number of modules.
An optimized context-based adaptive binary arithmetic coding algorithm in a progressive H.264 encoder
NASA Astrophysics Data System (ADS)
Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei
2006-05-01
Context-based Adaptive Binary Arithmetic Coding (CABAC) is a new entropy coding method introduced in H.264/AVC that is highly efficient for video coding. In this method, the probability of the current symbol is estimated using a carefully designed context model, which is adaptive and can approach the statistical characteristics of the source. An arithmetic coding mechanism then largely removes the inter-symbol redundancy. Compared with the UVLC method in the prior standard, CABAC is more complicated but reduces the bit rate efficiently. Based on a thorough analysis of the CABAC encoding and decoding methods, this paper proposes two methods, a sub-table method and a stream-reuse method, to improve the encoding efficiency as implemented in the H.264 JM reference code. In JM, the CABAC function produces the bits of each syntactic element one by one, and repeated multiplication operations in the CABAC function make it inefficient. The proposed algorithm creates tables beforehand and then produces all bits of a syntactic element at once. In JM, the intra-prediction and inter-prediction mode selection algorithm, with its different criteria, is based on the RDO (rate-distortion optimization) model. One of the parameters of the RDO model is the bit rate, which is produced by the CABAC operator. After intra-prediction or inter-prediction mode selection, the CABAC stream is discarded and recalculated for the output stream. The proposed stream-reuse algorithm stores the stream created in the mode selection stage in memory and reuses it in the encoding function. Experimental results show that the proposed algorithms achieve average speedups of 17 to 78 MSEL for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory space. The CABAC was realized in our progressive H.264 encoder.
The SWAN-SCALE code for the optimization of critical systems
Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.
1999-07-01
The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material in combination with other specified materials. The optimization process is iterative; in each iteration, SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.
NASA Astrophysics Data System (ADS)
Wilson, Joseph N.; Chen, LiangMing
1999-10-01
Various researchers have realized the value of implementing loop fusion to evaluate dense (pointwise) array expressions. Recently, the method of template metaprogramming in C++ has been used to significantly speed up the evaluation of array expressions, allowing C++ programs to achieve performance comparable to or better than FORTRAN for numerical analysis applications. Unfortunately, the template metaprogramming technique suffers from several limitations in applicability, portability, and potential performance. We present a framework for evaluating dense array expressions in object-oriented programming languages. We demonstrate how this technique supports both common subexpression elimination and threaded implementation, and we compare its performance to object-library and hand-generated code.
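The fused-loop idea behind such frameworks can be sketched in Python with a toy lazy-expression scheme (the paper targets object-oriented languages generally; the class names here are invented). Evaluating `a + b * c` walks the expression tree once per element, so no temporary array is ever materialized for `b * c`:

```python
class Expr:
    """Base class for lazy dense-array expressions (a loop-fusion sketch)."""
    def __add__(self, other):
        return BinOp(self, other, lambda x, y: x + y)
    def __mul__(self, other):
        return BinOp(self, other, lambda x, y: x * y)

class Array(Expr):
    """A concrete array: the leaves of the expression tree."""
    def __init__(self, data):
        self.data = list(data)
    def __getitem__(self, i):
        return self.data[i]
    def __len__(self):
        return len(self.data)

class BinOp(Expr):
    """An unevaluated elementwise operation on two sub-expressions."""
    def __init__(self, left, right, op):
        self.left, self.right, self.op = left, right, op
    def __getitem__(self, i):
        return self.op(self.left[i], self.right[i])   # recurse per element
    def __len__(self):
        return len(self.left)

def evaluate(expr):
    # One fused loop: no temporary arrays for sub-expressions.
    return [expr[i] for i in range(len(expr))]

a, b, c = Array([1, 2, 3]), Array([4, 5, 6]), Array([7, 8, 9])
result = evaluate(a + b * c)
print(result)   # [29, 42, 57]
```

C++ expression templates achieve the same fusion at compile time, which is where the performance (and the portability limitations the abstract mentions) comes from.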
Operationally optimal vertex-based shape coding with arbitrary direction edge encoding structures
NASA Astrophysics Data System (ADS)
Lai, Zhongyuan; Zhu, Junhuan; Luo, Jiebo
2014-07-01
The intention of shape coding in MPEG-4 is to improve coding efficiency as well as to facilitate object-oriented applications, such as shape-based object recognition and retrieval. These require both efficient shape compression and effective shape description. Although these two issues have been intensively investigated separately in the data compression and pattern recognition fields, the problem remains open when both objectives need to be considered together. To achieve high coding gain, the operational rate-distortion optimal framework can be applied, but the direction restriction of the traditional eight-direction edge encoding structure reduces its compression efficiency and description effectiveness. We present two arbitrary-direction edge encoding structures to relax this direction restriction. They consist of a sector number, a short component, and a long component, which together represent both the direction and the magnitude information of an encoding edge. Experiments on both shape coding and hand gesture recognition validate that our structures can greatly reduce the number of encoding vertices and save up to 48.9% of the bits. In addition, the object contours are effectively described and are suitable for object-oriented applications.
End-to-End Rate-Distortion Optimized MD Mode Selection for Multiple Description Video Coding
NASA Astrophysics Data System (ADS)
Heng, Brian A.; Apostolopoulos, John G.; Lim, Jae S.
2006-12-01
Multiple description (MD) video coding can be used to reduce the detrimental effects caused by transmission over lossy packet networks. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on the network conditions as well as on the characteristics of the video itself. This paper proposes an adaptive MD coding approach which adapts to these conditions through the use of adaptive MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths. With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively trade off compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and the network conditions, and we demonstrate the resulting gains in performance using an H.264-based adaptive MD video coder.
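R-D optimized mode selection of this kind amounts to minimizing an expected Lagrangian cost over the candidate modes. The sketch below is a heavily simplified single-loss-probability version (the paper's distortion model also captures burst losses and multiple paths), and every number in it is invented:

```python
def select_mode(modes, loss_prob, lmbda):
    """Pick the coding mode minimizing E[D] + lambda * R.

    Each mode is (name, rate_bits, distortion_if_received,
    distortion_if_lost); lambda trades rate against distortion.
    """
    def cost(mode):
        name, rate, d_ok, d_lost = mode
        expected_d = (1 - loss_prob) * d_ok + loss_prob * d_lost
        return expected_d + lmbda * rate
    return min(modes, key=cost)[0]

modes = [
    ("single-description", 100, 1.0, 40.0),  # efficient but fragile
    ("two-description",    140, 1.5, 12.0),  # redundant but resilient
]
low = select_mode(modes, loss_prob=0.01, lmbda=0.05)
high = select_mode(modes, loss_prob=0.20, lmbda=0.05)
print(low, high)   # light losses favor SD; heavy losses favor MD
```

The adaptivity in the paper comes from re-evaluating this kind of cost per region as the estimated channel statistics change.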
A treatment planning code for inverse planning and 3D optimization in hadrontherapy.
Bourhaleb, F; Marchetto, F; Attili, A; Pittà, G; Cirio, R; Donetti, M; Giordanengo, S; Givehchi, N; Iliescu, S; Krengli, M; La Rosa, A; Massai, D; Pecka, A; Pardo, J; Peroni, C
2008-09-01
The therapeutic use of protons and ions, especially carbon ions, is a new technique and a challenge for conforming the dose to the target, owing to the energy deposition characteristics of hadron beams. An appropriate treatment planning system (TPS) is strictly necessary to take full advantage of them. We developed a TPS software, ANCOD++, for the evaluation of the optimal conformal dose. ANCOD++ is an analytical code using the voxel-scan technique as an active method to deliver the dose to the patient, and provides treatment plans with both proton and carbon ion beams. The iterative algorithm, coded in C++ and running on Unix/Linux platforms, allows the determination of the best fluences of the individual beams to obtain an optimal physical dose distribution, delivering a maximum dose to the target volume and a minimum dose to critical structures. The TPS is supported by Monte Carlo simulations with the package GEANT3 to provide the necessary physical lookup tables and to verify the optimized treatment plans. Dose verifications done by means of full Monte Carlo simulations show an overall good agreement with the treatment planning calculations. We stress the fact that the purpose of this work is the verification of the physical dose, and future work will be dedicated to the radiobiological evaluation of the equivalent biological dose.
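A generic flavor of such iterative fluence optimization can be sketched as nonnegative least squares on a tiny dose matrix. This is not ANCOD++'s actual algorithm, just a minimal illustration of fitting beam fluences to a prescribed dose; the geometry and numbers are invented:

```python
import numpy as np

def optimize_fluences(dose_matrix, prescription, iters=500):
    """Projected-gradient fit of nonnegative beam fluences x so that the
    delivered dose D @ x approaches the prescription."""
    D = np.asarray(dose_matrix, dtype=float)
    p = np.asarray(prescription, dtype=float)
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # safe gradient step size
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x -= step * D.T @ (D @ x - p)            # gradient of 0.5*||Dx - p||^2
        x = np.maximum(x, 0.0)                   # fluences cannot be negative
    return x

# Invented 2-voxel, 2-beam geometry: target wants 2 Gy, critical organ 0 Gy.
D = np.array([[1.0, 0.2],    # dose per unit fluence into the target voxel
              [0.1, 1.0]])   # dose per unit fluence into the critical organ
x = optimize_fluences(D, [2.0, 0.0])
print(np.round(D @ x, 2))    # near-prescribed target dose, small organ dose
```

Real systems solve this with thousands of voxels and beam spots, with the dose matrix supplied by Monte Carlo or analytical beam models.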
Error threshold in optimal coding, numerical criteria, and classes of universalities for complexity
NASA Astrophysics Data System (ADS)
Saakian, David B.
2005-01-01
The free energy of the random energy model at the transition point between the ferromagnetic and spin glass phases is calculated. At this point, equivalent to the decoding error threshold in optimal codes, the free energy has finite-size corrections proportional to the square root of the number of degrees of freedom. The response of the magnetization to an external ferromagnetic field is maximal at values of magnetization equal to one-half. We give several criteria of complexity and define different universality classes. According to our classification, at the lowest class of complexity are random graphs, Markov models, and hidden Markov models. At the next level is the Sherrington-Kirkpatrick spin glass, connected to neural-network models. On a higher level are critical theories, the spin glass phase of the random energy model, percolation, and self-organized criticality. The top level class involves highly optimized tolerance design, error thresholds in optimal coding, language, and, maybe, financial markets. Living systems are also related to the last class. The concept of antiresonance is suggested for complex systems.
Optimizing performance of superscalar codes for a single Cray X1 MSP processor
Shan, Hongzhang; Strohmaier, Erich; Oliker, Leonid
2004-06-08
The growing gap between sustained and peak performance for full-scale complex scientific applications on conventional supercomputers is a major concern in high performance computing. The recently released vector-based Cray X1 offers to bridge this gap for many demanding scientific applications. However, this unique architecture contains both data caches and multi-streaming processing units, and the optimal programming methodology is still under investigation. In this paper we investigate Cray X1 code optimization for a suite of computational kernels originally designed for superscalar processors. For our study, we select four applications from the SPLASH2 application suite (1-D FFT, Radix, Ocean, and Nbody), two kernels from the NAS benchmark suite (3-D FFT and CG), and a matrix-matrix multiplication kernel. Results show that in many cases, the addition of vectorization compiler directives results in faster runtimes. However, to achieve a significant performance improvement via increased vector length, it is often necessary to restructure the program at the source level, sometimes leading to algorithm-level transformations. Additionally, memory bank conflicts may result in substantial performance losses. These conflicts can often be exacerbated when optimizing code for increased vector lengths, and must be explicitly minimized. Finally, we investigate the effect of the X1 data caches on overall performance.
NASA Astrophysics Data System (ADS)
Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong
2015-10-01
We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
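The modified GS algorithm of the paper is not reproduced here, but the classic Gerchberg-Saxton iteration it builds on can be sketched in 1D with NumPy: alternate between the source and Fourier planes, keeping the computed phase while imposing the known amplitude in each plane. The target pattern below is an invented toy:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=200):
    """Classic GS phase retrieval: find a source-plane phase whose
    Fourier transform approximates the target amplitude."""
    rng = np.random.default_rng(0)
    field = source_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, source_amp.shape))
    for _ in range(iters):
        far = np.fft.fft(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)

n = 64
source = np.ones(n)              # uniform illumination: phase-only control
target = np.zeros(n)
target[:8] = 1.0                 # concentrate energy into 8 Fourier bins
phase = gerchberg_saxton(source, target)
spectrum = np.abs(np.fft.fft(source * np.exp(1j * phase))) ** 2
frac = float(spectrum[:8].sum() / spectrum.sum())
print(round(frac, 3))            # fraction of energy landing on the target bins
```

The paper's modification extends this alternation to multiple signal windows and multiple planes, with the polarization mapping layered on top.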
Boulgouris, N V; Tzovaras, D; Strintzis, M G
2001-01-01
The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
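A minimal integer lifting step (the S-transform, a simple relative of the optimal predictors studied in the paper) shows why lifting suits lossless coding: the predict and update steps use only integer arithmetic, so the transform is exactly invertible. The sample values are invented:

```python
def lift_forward(x):
    """One integer lifting level: predict each odd sample from its even
    neighbor, then update the evens to carry a low-pass approximation."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - (d >> 1) for a, d in zip(approx, detail)]  # undo update
    odd = [e + d for e, d in zip(even, detail)]            # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [100, 102, 98, 97, 110, 111, 90, 85]
approx, detail = lift_forward(x)
assert lift_inverse(approx, detail) == x   # perfect reconstruction
print(detail)   # small detail coefficients are cheap to entropy-code
```

The paper's quincunx and row-column schemes apply the same predict/update pattern in 2D, with predictors optimized (and nonlinearly enhanced) rather than fixed as here.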
MINVAR: a local optimization criterion for rate-distortion tradeoff in real time video coding
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Ngan, King Ngi
2005-10-01
In this paper, we propose a minimum variation (MINVAR) distortion criterion for the rate-distortion tradeoff in video coding. The MINVAR-based rate-distortion tradeoff framework provides a local optimization strategy as a rate control mechanism for real-time video coding applications by minimizing the distortion variation while limiting the corresponding bit rate fluctuation through the encoder buffer. We use the H.264 video codec to evaluate the performance of the proposed method. As shown in the simulation results, the decoded picture quality of the proposed approach varies more smoothly than that of the traditional H.264 joint model (JM) rate control algorithm. The global video quality, measured as average PSNR, is maintained while a better subjective visual quality is guaranteed.
Code Optimization and Parallelization on the Origins: Looking from Users' Perspective
NASA Technical Reports Server (NTRS)
Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)
2002-01-01
Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.
NASA Astrophysics Data System (ADS)
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieving the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of the 4DCT lung anatomy. A novel fast simulated annealing algorithm with adaptive Monte Carlo sampling (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration-error metric for a given parameter set was computed as the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
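A generic simulated annealing loop, shown below as a stand-in for FSA-AMC (whose adaptive Monte Carlo sampling is not reproduced), illustrates how a non-convex parameter-error surface like mTRE can be searched. The cost function and all parameters here are invented:

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.995, iters=2000):
    """Generic annealing loop: always accept downhill moves, accept uphill
    moves with Boltzmann probability exp(-dE/T), and cool T geometrically."""
    random.seed(1)
    x, fx, t = init, cost(init), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        t *= cooling
    return best, fbest

# Invented non-convex stand-in for a registration-error surface.
cost = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2 + math.sin(5 * p[0]) ** 2
neighbor = lambda p: (p[0] + random.gauss(0, 0.3), p[1] + random.gauss(0, 0.3))
best, err = simulated_annealing(cost, (0.0, 0.0), neighbor)
print(round(err, 3))   # far below the starting cost of 13.0
```

The annealing schedule lets the search escape the local minima that defeat pure gradient methods, which is the property the paper relies on for its non-convex parameter space.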
NASA Astrophysics Data System (ADS)
Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.
2015-12-01
The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package to the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing the elapsed execution time of the studied routines as much as possible. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large, and extra large) are tested using a desktop environment with 4 CPU cores and a multi-node server with 128 CPU nodes. Elapsed times, speedup, and efficiency results are shown.
Li, Hui; Li, Shengtai; Jungman, Gerard; Hayes-Sterbenz, Anna Catherine
2016-08-31
The mechanisms for pinch formation in Dense Plasma Focus (DPF) devices, with the generation of high-energy ion beams and subsequent neutron production over a relatively short distance, are not fully understood. Here we report on high-fidelity 2D and 3D numerical magnetohydrodynamic (MHD) simulations using the LA-COMPASS code to study the pinch formation dynamics and its associated instabilities and neutron production.
Compiler blockability of dense matrix factorizations.
Carr, S.; Lehoucq, R. B.; Mathematics and Computer Science; Michigan Technological Univ.
1997-09-01
The goal of the LAPACK project is to provide efficient and portable software for dense numerical linear algebra computations. By recasting many of the fundamental dense matrix computations in terms of calls to an efficient implementation of the BLAS (Basic Linear Algebra Subprograms), the LAPACK project has, in large part, achieved its goal. Unfortunately, the efficient implementation of the BLAS often results in machine-specific code that is not portable across multiple architectures without a significant loss in performance or a significant effort to reoptimize it. This article examines whether most of the hand optimizations performed on matrix factorization codes are unnecessary because they can (and should) be performed by the compiler. We believe that it is better for the programmer to express algorithms in a machine-independent form and allow the compiler to handle the machine-dependent details. This gives the algorithms portability across architectures and removes the error-prone, expensive, and tedious process of hand optimization. Although there currently exist no production compilers that can perform all the loop transformations discussed in this article, a description of current research in compiler technology is provided that will prove beneficial to the numerical linear algebra community. We show that the Cholesky factorization may be optimized automatically by a compiler to be as efficient as the same hand-optimized version found in LAPACK. We also show that the QR factorization may be optimized by the compiler to perform comparably with the hand-optimized LAPACK version on modest matrix sizes. Our approach allows us to conclude that, with the advent of the compiler optimizations discussed in this article, matrix factorizations may be efficiently implemented in a BLAS-less form.
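The kind of blocked factorization the article wants compilers to derive can be sketched with a right-looking blocked Cholesky; NumPy stands in for the BLAS here, and the test matrix is invented:

```python
import numpy as np

def blocked_cholesky(A, nb=2):
    """Right-looking blocked Cholesky: factor a diagonal block, solve the
    panel below it, then apply a BLAS-3 rank-nb update to the trailing
    submatrix -- the blocked structure compilers could derive by loop
    transformation from the naive scalar loop nest."""
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(0, n, nb):
        e = min(k + nb, n)
        L[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])            # diagonal factor
        if e < n:
            L[e:, k:e] = A[e:, k:e] @ np.linalg.inv(L[k:e, k:e]).T  # panel solve
            A[e:, e:] -= L[e:, k:e] @ L[e:, k:e].T               # trailing update
    return L

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)               # symmetric positive definite test matrix
L = blocked_cholesky(A, nb=2)
print(np.allclose(L @ L.T, A))            # True: same factor a flat loop would give
```

The trailing matrix update is a pure matrix-matrix multiply, which is exactly the BLAS-3 kernel whose reuse of cached data makes blocked factorizations fast.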
NASA Astrophysics Data System (ADS)
Schmidt, A.; Conley, S. A.; Goeckede, M.; Andrews, A. E.; Masarie, K. A.; Sweeney, C.
2015-12-01
Modeled estimates of net ecosystem exchange (NEE) calculated with CLM4.5 at 4 km horizontal resolution were optimized using a classical Bayesian inversion approach with atmospheric mixing ratio observations from a dense tower network in Oregon. We optimized NEE in monthly batches for the years 2012 through 2014, and determined the associated reduction in flux uncertainties broken up by sub-domains. The WRF-STILT transport model was deployed to link modeled fluxes of CO2 to the concentrations from 5 high-precision CO2 observation towers equipped with CRDS analyzers. To find the best compromise between aggregation errors and the degrees of freedom in the system, we developed an approach for the spatial structuring of our domain, informed by an unsupervised clustering of the flux values of the prior state vector and of information about the land surface, soil, and vegetation distribution used in the model. To assess the uncertainty of the transport modeling component within our inverse optimization framework, we used data from 7 airborne measurement campaigns over the Oregon domain during the study period, providing detailed information about errors in the boundary-layer height and wind field of the transport model. The optimized model was then used to estimate future CO2 budgets for Oregon, including potential effects of LULC changes from conventional agriculture towards energy crops.
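The classical Bayesian (Gaussian) inversion step referred to above has a compact closed form. The sketch below is a generic textbook version, not the study's implementation; `H` stands for the transport (footprint) operator, `B` for the prior flux error covariance, and `R` for the combined observation and transport error covariance:

```python
import numpy as np

def bayesian_update(x_prior, B, H, y, R):
    """One classical Bayesian (Gaussian) inversion step: posterior mean and
    covariance of fluxes x given mixing-ratio observations y = H x + noise."""
    S = H @ B @ H.T + R                    # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)         # Kalman-style gain
    x_post = x_prior + K @ (y - H @ x_prior)
    B_post = B - K @ H @ B                 # shrunken posterior covariance
    return x_post, B_post
```

The diagonal of `B_post` versus `B` quantifies the "reduction in flux uncertainties" the abstract mentions.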
Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł
2016-12-01
One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in a reasonable time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EAs) seem to be one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of the algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the cost of amino acid replacement in terms of polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
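The interplay of mutation and crossover probabilities described above can be illustrated with a toy EA over permutations (codon-assignment-like solutions), including a position-based crossover. Everything here is illustrative: the fitness function is a placeholder, not the polarity-based cost of the paper:

```python
import random

def evolve(cost, n, pop_size=30, generations=200, p_mut=0.3, p_cross=0.7, seed=1):
    """Minimize cost(perm) over permutations of range(n) with a simple EA
    using swap mutation and position-based crossover."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        next_pop = pop[:2]                              # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)   # truncation selection
            child = a[:]
            if rng.random() < p_cross:                  # position-based crossover:
                keep = set(rng.sample(range(n), n // 2))
                kept_genes = {a[i] for i in keep}       # keep some positions of a,
                fill = [g for g in b if g not in kept_genes]
                child = [a[i] if i in keep else fill.pop(0) for i in range(n)]
            if rng.random() < p_mut:                    # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)

# Toy cost: distance of each element from its sorted position.
cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))
best = evolve(cost, 8)
```

Varying `p_mut` and `p_cross` reproduces, in miniature, the kind of parameter study the abstract describes.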
NASA Astrophysics Data System (ADS)
Wei, Ying-Kang; Luo, Xiao-Tao; Li, Cheng-Xin; Li, Chang-Jiu
2017-01-01
Magnesium-based alloys have excellent physical and mechanical properties for many applications. However, due to their high chemical reactivity, magnesium and its alloys are highly susceptible to corrosion. In this study, an Al6061 coating was deposited on AZ31B magnesium by cold spray with a commercial Al6061 powder blended with large-sized stainless steel particles (in-situ shot-peening particles) using nitrogen gas. The microstructure and corrosion behavior of the sprayed coating were investigated as a function of the shot-peening particle content in the feedstock. It is found that by introducing the in-situ tamping effect of the shot-peening (SP) particles, the plastic deformation of the deposited particles is significantly enhanced, resulting in a fully dense Al6061 coating. SEM observations reveal that no SP particles are deposited into the Al6061 coating under the optimized spraying parameters. The porosity of the coating significantly decreases from 10.7 to 0.4% as the SP particle content increases from 20 to 60 vol.%. The electrochemical corrosion experiments reveal that this novel in-situ SP-assisted cold spraying is effective for depositing a fully dense Al6061 coating that is impermeable to aqueous solution and can thus provide exceptional protection of magnesium-based materials against corrosion.
Grigorov, Filip; van der Kouwe, Andre J.; Wald, Lawrence L.; Keil, Boris
2015-01-01
Purpose: Functional neuroimaging of small cortical patches such as columns is essential for testing computational models of vision, but imaging from cortical columns at conventional 3T fields is exceedingly difficult. By targeting the visual cortex exclusively, we tested whether combined optimization of shape, coil placement, and electronics would yield the necessary gains in signal-to-noise ratio (SNR) for submillimeter visual cortex functional MRI (fMRI). Method: We optimized the shape of the housing to a population-averaged atlas. The shape was comfortable without cushions and resulted in the maximally proximal placement of the coil elements. By using small wire loops with the least number of solder joints, we were able to maximize the Q factor of the individual elements. Finally, by planning the placement of the coils using the brain atlas, we were able to target the arrangement of the coil elements to the extent of the visual cortex. Results: The combined optimizations led to as much as two-fold SNR gain compared with a whole-head 32-channel coil. This gain was reflected in temporal SNR as well and enabled fMRI mapping at 0.75 mm resolutions using a conventional GRAPPA-accelerated gradient echo echo planar imaging. Conclusion: Integrated optimization of shape, electronics, and element placement can lead to large gains in SNR and empower submillimeter fMRI at 3T. Magn Reson Med 76:321–328, 2016. © 2015 Wiley Periodicals, Inc. PMID:26218835
NASA Astrophysics Data System (ADS)
Piron, R.; Blenski, T.
2011-02-01
The numerical code VAAQP (variational average atom in quantum plasmas), which is based on a fully variational model of equilibrium dense plasmas, is applied to equation-of-state calculations for aluminum, iron, copper, and lead in the warm-dense-matter regime. VAAQP does not impose the neutrality of the Wigner-Seitz ion sphere; it provides the average-atom structure and the mean ionization self-consistently from the solution of the variational equations. The formula used for the electronic pressure is simple and does not require any numerical differentiation. In this paper, the virial theorem is derived in both the nonrelativistic and relativistic versions of the model. This theorem allows one to express the electron pressure as a combination of the electron kinetic and interaction energies. It is shown that the model automatically fulfills the virial theorem in the case of local-density approximations to the exchange-correlation free energy. Applications of the model to the equation of state and Hugoniot shock adiabat of aluminum, iron, copper, and lead in the warm-dense-matter regime are presented. Comparisons with other approaches, including the inferno model, and with available experimental data are given. This work allows one to understand the thermodynamic consistency issues in the existing average-atom models. Starting from the case of aluminum, a comparative study of the thermodynamic consistency of the models is proposed. A preliminary study of the validity domain of the inferno model is also included.
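For orientation, the nonrelativistic virial relation the abstract alludes to takes, for a Coulomb system, the schematic textbook form (notation illustrative, not necessarily the paper's exact decomposition):

```latex
% Schematic nonrelativistic virial relation for the electron pressure:
3\,P\,V \;=\; 2\,E_{\mathrm{kin}} \;+\; E_{\mathrm{int}},
```

so the pressure follows from the kinetic and interaction energies without numerical differentiation of the free energy, which is the consistency property the paper exploits.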
Optimal design of FIR triplet halfband filter bank and application in image coding.
Kha, H H; Tuan, H D; Nguyen, T Q
2011-02-01
This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear-phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least-squares error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as an SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.
Optimization and implementation of the integer wavelet transform for image coding.
Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella
2002-01-01
This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The results lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite-precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
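As a concrete example of a lifting factorization in integer arithmetic, here is the well-known 5/3 integer wavelet (the JPEG 2000 reversible filter) with symmetric boundary clamping, not necessarily the factorization selected by the paper's criteria. The predict/update steps use only integer shifts, so the transform is exactly invertible:

```python
def iwt53_forward(x):
    """One level of the integer 5/3 lifting wavelet transform (even-length
    input). Returns (approximation, detail); exactly invertible in integers."""
    s, d = x[0::2], x[1::2]
    # Predict: detail = odd - floor((left_even + right_even) / 2)
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    # Update: approx = even + floor((d_left + d_right + 2) / 4)
    s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    return s, d

def iwt53_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

Because each lifting step is subtracted back exactly, lossless reconstruction holds for any integer input.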
NASA Astrophysics Data System (ADS)
Lokavarapu, H. V.; Matsui, H.
2015-12-01
Convection and the magnetic field of the Earth's outer core are expected to have vast length scales. Resolving these flows in geodynamo simulations requires high-performance computing; with spherical harmonic transforms (SHT), a significant portion of the execution time is spent on the Legendre transform. Calypso is a geodynamo code designed to model magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of the Earth. The code has been shown to scale well on computer clusters up to the order of 10⁵ cores using Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization for CPUs. To optimize further, we investigate three different algorithms for the SHT using GPUs. One is to precompute the Legendre polynomials on the CPU before executing the SHT on the GPU within the time integration loop. In the second approach, both the Legendre polynomials and the SHT are computed on the GPU. In the third approach, we initially partition the radial grid for the forward transform and the harmonic order for the backward transform between the CPU and GPU; thereafter, the partitioned work is computed simultaneously in the time integration loop. We examine the trade-offs between space and time, memory bandwidth, and GPU computations on Maverick, a Texas Advanced Computing Center (TACC) supercomputer. We have observed improved performance using a GPU-enabled Legendre transform. Furthermore, we will compare and contrast the different algorithms in the context of GPUs.
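The Legendre transform pair that dominates the SHT cost can be sketched in plain NumPy (CPU only; function names and grid choices are illustrative, not Calypso's). The dense matrix products below are exactly the kernels one would offload to a GPU, and the precomputed matrix `P` corresponds to the "precompute the Legendre polynomials" strategy:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def make_legendre_transforms(lmax, n_nodes):
    """Forward/backward Legendre transforms on a Gauss-Legendre grid.
    forward: a_l = (2l+1)/2 * sum_j w_j P_l(x_j) f(x_j)
    backward: f(x_j) = sum_l a_l P_l(x_j)
    Exact for bandlimited f when n_nodes is large enough."""
    nodes, weights = leg.leggauss(n_nodes)
    # P[l, j] = P_l(x_j), precomputed once.
    P = np.stack([leg.legval(nodes, [0.0] * l + [1.0]) for l in range(lmax + 1)])
    norm = (2.0 * np.arange(lmax + 1) + 1.0) / 2.0

    def forward(f_vals):        # physical grid -> spectral coefficients
        return norm * (P @ (weights * f_vals))

    def backward(coeffs):       # spectral coefficients -> physical grid
        return P.T @ coeffs

    return forward, backward, nodes
```

With Gauss-Legendre quadrature of sufficient order, the forward transform exactly recovers the coefficients of a bandlimited field.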
Design of coded aperture arrays by means of a global optimization algorithm
NASA Astrophysics Data System (ADS)
Lang, Haitao; Liu, Liren; Yang, Qingguo
2006-08-01
Coded aperture imaging (CAI) has evolved as a standard technique for imaging high-energy photon sources and has found numerous applications. Coded aperture arrays (CAAs) are the most important devices in applications of CAI. In recent years, many approaches were presented to design optimum or near-optimum CAAs. Uniformly redundant arrays (URAs) are the most successful CAAs because their cyclic autocorrelation consists of a sequence of delta functions on a flat sidelobe, which can easily be subtracted once the object has been reconstructed. Unfortunately, the existing methods can only be used to design URAs with a limited number of array sizes and a fixed autocorrelation sidelobe-to-peak ratio. In this paper, we present a method to design more flexible URAs by means of a global optimization algorithm named DIRECT. With our approach, we obtain various types of URAs, including the filled URAs that can be constructed by existing methods and sparse URAs that, to our knowledge, have not been constructed or mentioned in existing papers.
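The defining autocorrelation property is easy to verify in one dimension. For a prime p ≡ 3 (mod 4), the quadratic residues form a difference set, so the resulting open/closed aperture has a central autocorrelation peak on a perfectly flat sidelobe — the property that makes URA reconstruction a simple correlation. (This is a standard textbook construction, not one of the DIRECT-optimized arrays of the paper.)

```python
def qr_aperture(p):
    """Open/closed (1/0) aperture from the quadratic residues modulo a prime
    p = 3 (mod 4); its cyclic autocorrelation is a peak on a flat sidelobe."""
    residues = {(i * i) % p for i in range(1, p)}
    return [1 if i in residues else 0 for i in range(p)]

def cyclic_autocorrelation(a):
    n = len(a)
    return [sum(a[i] * a[(i + k) % n] for i in range(n)) for k in range(n)]
```

For p = 11, the aperture has 5 open cells and every off-peak autocorrelation lag equals 2.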
Chen, Xinhua; Zhou, Jiankang; Shen, Weimin
2016-09-05
A wavefront coding system can realize defocus invariance of the PSF/OTF with a phase mask inserted in the pupil plane. Ideally, the derivative of the PSF/OTF with respect to defocus error should be as close to zero as possible over the extended depth of field/focus of the wavefront coding system. In this paper, we propose an analytical expression for the computation of the derivative of the PSF. With this expression, a merit function based on the derivative of the PSF can be used in the optimization of a wavefront coding system with any type of phase mask and aberrations. Computations of the derivative of the PSF using the proposed expression and the FFT, respectively, are compared and discussed. We also demonstrate the optimization of a generic polynomial phase mask in a wavefront coding system as an example.
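The kind of analytical derivative involved can be sketched in a generic scalar Fourier-optics form (notation illustrative; the paper's exact expression may differ). With pupil function P(u,v), mask phase θ(u,v), and defocus parameter ψ, differentiating under the Fourier transform gives:

```latex
A(x,y;\psi) = \mathcal{F}\!\left\{ P(u,v)\, e^{\,j[\theta(u,v) + \psi(u^2+v^2)]} \right\},
\qquad \mathrm{PSF} = |A|^2,
```
```latex
\frac{\partial\,\mathrm{PSF}}{\partial \psi}
  = 2\,\mathrm{Re}\!\left\{ A^{*}\,
    \mathcal{F}\!\left\{\, j\,(u^2+v^2)\, P\, e^{\,j[\theta + \psi(u^2+v^2)]} \right\} \right\},
```

i.e. the defocus derivative of the PSF is itself computable from a second transform of the weighted pupil, without finite differences.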
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose versus shield-thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
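The descent loop described above can be sketched as a toy in a few lines: steepest descent on total shield weight with a quadratic penalty enforcing an exponential dose-thickness model. This is a hedged illustration with invented constants, not the DOPEX code:

```python
import math

def optimize_shield(rho, mu, d0, d_max, t0, steps=3000, lr=0.01, penalty=50.0):
    """Steepest-descent sketch in the spirit of DOPEX: minimize shield weight
    sum(rho_i * t_i) subject to the exponential dose model
    dose(t) = d0 * exp(-sum(mu_i * t_i)) <= d_max, via a quadratic penalty."""
    t = list(t0)
    for _ in range(steps):
        dose = d0 * math.exp(-sum(m * ti for m, ti in zip(mu, t)))
        viol = max(0.0, dose - d_max)
        for i in range(len(t)):
            # d(weight)/dt_i = rho_i ;  d(dose)/dt_i = -mu_i * dose
            grad = rho[i] + 2.0 * penalty * viol * (-mu[i] * dose)
            t[i] = max(0.0, t[i] - lr * grad)
    return t
```

Starting from an over-thick shield, the thicknesses shrink until the dose constraint becomes active, and the weight drops well below its initial value.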
NASA Astrophysics Data System (ADS)
Gather, Malte C.; Yun, Seok Hyun
2014-12-01
Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (−7 dB) and support strong optical amplification (gnet = 22 cm−1; 96 dB cm−1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.
Selecting a proper design period for heliostat field layout optimization using Campo code
NASA Astrophysics Data System (ADS)
Saghafifar, Mohammad; Gadalla, Mohamed
2016-09-01
In this paper, different approaches are considered to calculate the cosine factor that is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to consider the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important. Therefore, it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields, containing 350, 1460, and 3450 mirrors, are 66.14%, 60.87%, and 54.04%, respectively.
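The instantaneous cosine factor itself is simple vector geometry: the mirror normal must bisect the sun direction and the heliostat-to-receiver direction, so the projected-area efficiency is cos(θ/2). A minimal sketch (not the Campo implementation; a time-averaged factor would average this quantity over the chosen design period):

```python
import math

def cosine_factor(sun, to_receiver):
    """Cosine efficiency of a heliostat: the mirror normal bisects the sun
    direction and the direction to the receiver (both unit vectors pointing
    away from the heliostat), so the factor is cos(theta/2) = n . sun."""
    bisector = [a + b for a, b in zip(sun, to_receiver)]
    norm = math.sqrt(sum(c * c for c in bisector))
    n = [c / norm for c in bisector]
    return sum(a * b for a, b in zip(n, sun))
```

For a heliostat that must redirect overhead sunlight through 90 degrees, the factor is cos(45°) ≈ 0.707.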
Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding
NASA Astrophysics Data System (ADS)
Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai
2010-12-01
We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of the SVA. Both objective and subjective evaluations of the extracted ROIs indicate that the proposed SVA-based ROI extraction scheme outperforms schemes using only spatial and/or temporal visual attention cues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for compression efficiency. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of imperceptible image quality degradation in the background.
NASA Astrophysics Data System (ADS)
Jiang, Shanhu; Ren, Liliang; Hong, Yang; Yong, Bin; Yang, Xiaoli; Yuan, Fei; Ma, Mingwei
2012-07-01
This study first comprehensively evaluates three widely used satellite precipitation products (TMPA 3B42V6, TMPA 3B42RT, and CMORPH) against a dense rain gauge network in the Mishui basin (9972 km2) in South China, and then optimally merges their simulated hydrologic flows from the semi-distributed Xinanjiang model using the Bayesian model averaging method. The initial satellite precipitation comparisons show that the reanalyzed 3B42V6, with a bias of -4.54%, matched the rain gauge observations best, while the two near real-time satellite datasets (3B42RT and CMORPH) largely underestimated precipitation, by 42.72% and 40.81% respectively. With the model parameters first benchmarked against the rain gauge data, the streamflow simulation from 3B42V6 was also the best among the three products, while the two near real-time satellite datasets produced deteriorated biases and Nash-Sutcliffe coefficients (NSCEs). Still, when the model parameters were recalibrated against each individual satellite dataset, the streamflow simulations from the two near real-time satellite products improved significantly, demonstrating the need to calibrate hydrological models specifically for near real-time satellite inputs. Moreover, when the streamflows forced by the two near real-time satellite precipitation products, and by all three satellite precipitation products, were optimally merged using the Bayesian model averaging method, the resulting streamflow series improved further and became more robust. In summary, the three current state-of-the-art satellite precipitation products have demonstrated potential in hydrological research and applications. The benchmarking, recalibration, and optimal merging schemes for streamflow simulation at a basin scale described in the present work will hopefully be a reference for future utilizations of satellite precipitation products in global and regional
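The merging step can be illustrated with a stripped-down sketch: weight each simulated streamflow series by its Gaussian likelihood against observations and average. The full BMA procedure estimates member weights and variances with an EM algorithm; this toy uses a fixed error scale `sigma` and direct likelihood weights, purely for illustration:

```python
import math

def bma_merge(sims, obs, sigma=1.0):
    """Toy Bayesian model averaging: weight each member series by its Gaussian
    likelihood against observations, then form the weighted combination."""
    logls = [-sum((s - o) ** 2 for s, o in zip(sim, obs)) / (2 * sigma ** 2)
             for sim in sims]
    m = max(logls)                                  # stabilize the exponentials
    w = [math.exp(l - m) for l in logls]
    total = sum(w)
    w = [x / total for x in w]
    merged = [sum(w[k] * sims[k][t] for k in range(len(sims)))
              for t in range(len(obs))]
    return merged, w
```

A member that tracks the observations closely dominates the weights, so the merged series inherits the best member's skill.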
NASA Astrophysics Data System (ADS)
Cui, Laizhong; Jiang, Yong; Wu, Jianping; Xia, Shutao
Most large-scale Peer-to-Peer (P2P) live streaming systems are constructed as a mesh structure, which can provide robustness in the dynamic P2P environment. The pull scheduling algorithm widely used in this mesh structure, however, degrades the performance of the entire system. Recently, network coding was introduced in mesh P2P streaming systems to improve performance, which makes the push strategy feasible. One of the most famous scheduling algorithms based on network coding is R2, with a random push strategy. Although R2 has achieved some success, the push scheduling strategy still lacks a theoretical model and optimal solution. In this paper, we propose a novel optimal pull-push scheduling algorithm based on network coding, which consists of two stages: an initial pull stage and a push stage. The main contributions of this paper are: 1) we put forward a theoretical analysis model that considers the scarcity and timeliness of segments; 2) we formulate the push scheduling problem as a global optimization problem and decompose it into local optimization problems on individual peers; 3) we introduce rules to transform each local optimization problem into a classical min-cost optimization problem in order to solve it; 4) we combine the pull strategy with the push strategy and systematically realize our scheduling algorithm. Simulation results demonstrate that the decode delay, decode ratio, and redundant fraction of a P2P streaming system with our algorithm can be significantly improved, without losing throughput or increasing overhead.
Unbalanced Multiple-Description Video Coding with Rate-Distortion Optimization
NASA Astrophysics Data System (ADS)
Comas, David; Singh, Raghavendra; Ortega, Antonio; Marqués, Ferran
2003-12-01
We propose to use multiple-description coding (MDC) to protect video information against packet losses and delay, while also ensuring that it can be decoded using a standard decoder. Video data are encoded into a high-resolution stream using a standard-compliant encoder. In addition, a low-resolution stream is generated by duplicating the relevant information (motion vectors, headers, and some of the DCT coefficients) from the high-resolution stream while the remaining coefficients are set to zero. Both streams are independently decodable by a standard decoder. The received high-resolution description is decoded when available; only in case of losses in the high-resolution description is the corresponding information decoded from the low-resolution stream. The main contribution of this paper is an optimization algorithm which, given the loss ratio, allocates bits to both descriptions and selects the right number of coefficients to duplicate in the low-resolution stream so as to minimize the expected distortion at the decoder end.
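The flavor of that optimization can be captured in a toy allocator: given the loss ratio, choose how many coefficients to duplicate into the low-resolution stream so that the expected end-to-end distortion is minimized within a rate budget. This is an illustrative model with invented quantities, not the paper's actual rate-distortion machinery:

```python
def best_duplication(p_loss, budget, hi_rate, dup_rates, dup_gains, d_hi, d_base):
    """Choose n, the number of DCT coefficients duplicated in the low-res
    stream. Expected distortion: (1-p)*d_hi + p*(d_base - recovered gains);
    feasibility: hi_rate + rate of the n duplicated coefficients <= budget."""
    best_n, best_d = 0, float("inf")
    for n in range(len(dup_rates) + 1):
        rate = hi_rate + sum(dup_rates[:n])
        if rate > budget:
            break                       # rates are cumulative, so stop here
        d = (1 - p_loss) * d_hi + p_loss * (d_base - sum(dup_gains[:n]))
        if d < best_d:
            best_n, best_d = n, d
    return best_n, best_d
```

Higher loss ratios shift the optimum toward duplicating more coefficients, which is exactly the trade-off the paper's algorithm navigates.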
Code to Optimize Load Sharing of Split-Torque Transmissions Applied to the Comanche Helicopter
NASA Technical Reports Server (NTRS)
1995-01-01
Most helicopters now in service have a transmission with a planetary design. Studies have shown that some helicopters would be lighter and more reliable if they had a transmission with a split-torque design instead. However, a split-torque design has never been used by a U.S. helicopter manufacturer because there has been no proven method to ensure equal sharing of the load among the multiple load paths. The Sikorsky/Boeing team has chosen to use a split-torque transmission for the U.S. Army's Comanche helicopter, and Sikorsky Aircraft is designing and manufacturing the transmission. To help reduce the technical risk of fielding this helicopter, NASA and the Army have done the research jointly in cooperation with Sikorsky Aircraft. A theory was developed that equal load sharing could be achieved by proper configuration of the geartrain, and a computer code was completed in-house at the NASA Lewis Research Center to calculate this optimal configuration.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing further acceleration.
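The logarithm dominates because each photon step samples a path length as -ln(ξ)/μt. A representative (not the paper's) fast approximation splits the argument into mantissa and exponent and uses a short series for the mantissa, avoiding the library logarithm in the hot loop:

```python
import math

def fast_log(x):
    """Approximate ln(x) for 0 < x <= 1: split x into mantissa m in [0.5, 1)
    and exponent e (x = m * 2**e), then use a short atanh-series polynomial
    for ln(m). Maximum absolute error is on the order of 1e-4."""
    m, e = math.frexp(x)               # x = m * 2**e, with 0.5 <= m < 1
    u = (m - 1.0) / (m + 1.0)          # ln(m) = 2 * atanh(u), |u| <= 1/3
    u2 = u * u
    ln_m = 2.0 * u * (1.0 + u2 / 3.0 + u2 * u2 / 5.0)
    return ln_m + e * 0.6931471805599453   # add e * ln(2)
```

Because |u| ≤ 1/3 on the mantissa range, truncating the series after the u⁵ term already keeps the error far below the 1% level the paper targets.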
Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V
2014-06-01
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and percentage depth dose (PDD) of GATE and PHITS codes have not been reported which are studied for PDD and proton range compared to the FLUKA code and the experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blue print and validated with the commissioning data. Three parameters evaluated are the maximum step size, cut off energy and physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show a good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated range of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the results for PDDs obtained with GATE and PHITS Monte Carlo generalpurpose codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometrybased descriptions need accurate customization in three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
Goodarzi, Hani; Najafabadi, Hamed Shateri; Hassani, Kasra; Nejad, Hamed Ahmadi; Torabi, Noorossadat
2005-08-07
Statistical and biochemical studies have revealed non-random patterns in codon assignments. The canonical genetic code is known to be highly efficient in minimizing the effects of mistranslation errors and point mutations: when an amino acid is converted to another due to error, the biochemical properties of the resulting amino acid are usually very similar to those of the original one. In this study, using altered forms of the fitness functions used in prior studies, we have optimized the parameters involved in the calculation of the error-minimizing property of the genetic code so that the genetic code outscores the random codes as much as possible. This work also compares two prominent matrices, the Mutation Matrix and Point Accepted Mutations 74-100 (PAM(74-100)). We find that the hypothetical properties of the coevolution theory of the genetic code are already considered in PAM(74-100), providing further evidence of a bias towards the genetic code in this matrix. Furthermore, our results indicate that PAM(74-100) is biased towards single-base mistranslation occurrences in the second codon position as well as the frequency of amino acids. Thus PAM(74-100) is not a suitable substitution matrix for studies of the evolution of the genetic code.
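The error-minimization comparison described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the polar-requirement values are approximate, the fitness function is a plain mean-squared property change over all single-point mutations, and random codes are generated by shuffling the amino-acid assignments of the codon blocks.

```python
import itertools, random, statistics

BASES = "TCAG"
CODONS = ["".join(p) for p in itertools.product(BASES, repeat=3)]
# standard genetic code in TCAG codon order ('*' = stop)
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CANONICAL = dict(zip(CODONS, AMINO))

# approximate Woese polar-requirement values (illustrative property scale)
PR = {"A": 7.0, "C": 4.8, "D": 13.0, "E": 12.5, "F": 5.0, "G": 7.9,
      "H": 8.4, "I": 4.9, "K": 10.1, "L": 4.9, "M": 5.3, "N": 10.0,
      "P": 6.6, "Q": 8.6, "R": 9.1, "S": 7.5, "T": 6.6, "V": 5.6,
      "W": 5.2, "Y": 5.4}

def ms_change(code):
    """Mean squared property change over all single-point mutations."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                aa2 = code[codon[:pos] + b + codon[pos + 1:]]
                if aa2 != "*":
                    total += (PR[aa] - PR[aa2]) ** 2
                    n += 1
    return total / n

canonical_score = ms_change(CANONICAL)

# random codes: shuffle which amino acid each codon block encodes
rng = random.Random(1)
aas = sorted(PR)
rand_scores = []
for _ in range(100):
    perm = aas[:]
    rng.shuffle(perm)
    relabel = dict(zip(aas, perm))
    rand_scores.append(ms_change({c: relabel.get(a, "*")
                                  for c, a in CANONICAL.items()}))

# the canonical code has a much lower error score than typical random codes
assert canonical_score < statistics.mean(rand_scores)
```

Under this kind of metric the canonical code's advantage is large, which is why the parameter optimization in the abstract can only widen the gap further.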
The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner
Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David
2014-01-01
A major cue to the location of a sound source is the interaural time difference (ITD), the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs, with exquisite sensitivity to ITDs achieved by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model; for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, as yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405
Fillit, Howard; Geldmacher, David S; Welter, Richard Todd; Maslow, Katie; Fraser, Malcolm
2002-11-01
The objectives of this study were to review the diagnostic, International Classification of Disease, 9th Revision, Clinical Modification (ICD-9-CM), diagnosis related groups (DRGs), and common procedural terminology (CPT) coding and reimbursement issues (including Medicare Part B reimbursement for physicians) encountered in caring for patients with Alzheimer's disease and related dementias (ADRD); to review the implications of these policies for the long-term clinical management of the patient with ADRD; and to provide recommendations for promoting appropriate recognition and reimbursement for clinical services provided to ADRD patients. Relevant English-language articles identified from MEDLINE about ADRD prevalence estimates; disease morbidity and mortality; diagnostic coding practices for ADRD; and Medicare, Medicaid, and managed care organization data on diagnostic coding and reimbursement were reviewed. Alzheimer's disease (AD) is grossly undercoded. Few AD cases are recognized at an early stage. Only 13% of a group of patients receiving the AD therapy donepezil had AD as the primary diagnosis, and AD is rarely included as a primary or secondary DRG diagnosis when the condition precipitating admission to the hospital is caused by AD. In addition, AD is often not mentioned on death certificates, although it may be the proximate cause of death. There is only one ICD-9-CM code for AD (331.0) and no clinical modification codes, despite numerous complications that can be directly attributed to AD. Medicare carriers consider ICD-9 codes for senile dementia (290 series) to be mental health codes and pay them at a lower rate than medical codes. DRG coding is biased against recognition of ADRD as an acute, admitting diagnosis. The CPT code system is an impediment to quality of care for ADRD patients because the complex, time-intensive services ADRD patients require are not adequately, if at all, reimbursed. Also, physicians treating significant numbers of AD patients are
Insertion of operation-and-indicate instructions for optimized SIMD code
Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K
2013-06-04
Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.
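A toy software analogue of the operation-and-indicate idea described above (hypothetical, not the patented mechanism): perform the lane-wise operation, store a special value in the output lane instead of trapping, and indicate which lanes saw an exception condition so handling can be deferred.

```python
NAN = float("nan")  # stand-in for a "special exception value" in a vector lane

def vec_div_and_indicate(a, b):
    """Lane-wise divide that records exceptions instead of raising.

    Returns (results, flags): flags[i] is True where lane i hit an
    exception condition (here, division by zero), and results[i] holds
    a special value (NaN) for that lane.
    """
    out, flags = [], []
    for x, y in zip(a, b):
        if y == 0.0:
            out.append(NAN)      # store the special value, don't trap
            flags.append(True)   # ...but indicate the condition
        else:
            out.append(x / y)
            flags.append(False)
    return out, flags

res, exc = vec_div_and_indicate([1.0, 2.0, 3.0], [2.0, 0.0, 1.5])
assert exc == [False, True, False]
```

Deferring the indication this way is what lets the surrounding vectorized code run speculatively without immediate exception handling.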
Code-Switching and the Optimal Grammar of Bilingual Language Use
ERIC Educational Resources Information Center
Bhatt, Rakesh M.; Bolonyai, Agnes
2011-01-01
In this article, we provide a framework of bilingual grammar that offers a theoretical understanding of the socio-cognitive bases of code-switching in terms of five general principles that, individually or through interaction with each other, explain how and why specific instances of code-switching arise. We provide cross-linguistic empirical…
Peter Cebull
2004-05-01
The Attila radiation transport code, which solves the Boltzmann neutron transport equation on three-dimensional unstructured tetrahedral meshes, was ported to a Cray SV1. Cray's performance analysis tools pointed to two subroutines that together accounted for 80%-90% of the total CPU time. Source code modifications were performed to enable vectorization of the most significant loops, to correct unfavorable strides through memory, and to replace a conjugate gradient solver subroutine with a call to the Cray Scientific Library. These optimizations resulted in a speedup of 7.79 for the INEEL's largest ATR model. Parallel scalability of the OpenMP version of the code is also discussed, and timing results are given for other non-vector platforms.
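The stride-correction idea in the abstract can be illustrated with a minimal sketch (Python standing in for the Fortran source; in compiled code the innermost loop's access order determines the memory stride, and unit-stride loops are what the vectorizer wants):

```python
# 2-D array stored row-major as nested lists; traversal order changes the
# effective memory stride between consecutive accesses
R, C = 64, 64
a = [[i * C + j for j in range(C)] for i in range(R)]

def sum_col_major(m):
    # innermost index walks down a column: stride of C elements per access
    return sum(m[i][j] for j in range(C) for i in range(R))

def sum_row_major(m):
    # unit-stride traversal: consecutive elements, vectorizer/cache friendly
    return sum(x for row in m for x in row)

# restructuring the loop must not change the result, only the access pattern
assert sum_row_major(a) == sum_col_major(a) == (R * C) * (R * C - 1) // 2
```

The same invariance check (identical sums, different loop order) is the basic safety test when reordering loops for vectorization.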
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders
Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed
2013-06-01
We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.
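A minimal sketch of matrix-based symplectic tracking with a per-turn beam-beam kick, under simplifying assumptions: the kick is linearized (the small-amplitude limit of the Bassetti-Erskine field of a Gaussian opposing beam), the one-turn map is a pure phase-space rotation, and the tune and kick strength are illustrative numbers.

```python
import math

def one_turn_map(mu):
    # linear, symplectic one-turn matrix in (x, x') phase space (det = 1)
    c, s = math.cos(mu), math.sin(mu)
    return [[c, s], [-s, c]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def beam_beam_kick(v, k=1e-3):
    # linearized thin-lens kick: x' -> x' - k*x; the full code would use
    # the nonlinear Bassetti-Erskine field, which reduces to this near the core
    return [v[0], v[1] - k * v[0]]

M = one_turn_map(2 * math.pi * 0.31)       # illustrative fractional tune
v = [1e-3, 0.0]
for _ in range(10_000):                    # long-term turn-by-turn tracking
    v = beam_beam_kick(apply(M, v))

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(det - 1.0) < 1e-12              # symplectic: area-preserving map
assert math.hypot(v[0], v[1]) < 1e-2       # motion stays bounded (stable tune)
```

The GPU version parallelizes exactly this loop over many particles, since each particle's map-kick-map sequence is independent within a turn.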
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the GATE, PHITS and FLUKA MC codes, previously examined for the uniform scanning proton beam, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We
Rudigoz, René-Charles; Huissoud, Cyril; Delecour, Lisa; Thevenet, Simone; Dupont, Corinne
2014-06-01
The medical team of the Croix Rousse teaching hospital maternity unit has developed, over the last ten years, a set of procedures designed to respond to various emergency situations necessitating Caesarean section. Using the Lucas classification, we have defined as precisely as possible the degree of urgency of Caesarean sections. We have established specific protocols for the implementation of urgent and very urgent Caesarean sections and have chosen a simple means to convey the degree of urgency to all team members, namely a color code system (red, orange and green). We have set time goals from decision to delivery: 15 minutes for the red code and 30 minutes for the orange code. The results seem very positive: the frequency of urgent and very urgent Caesareans has fallen over time, from 6.1% to 1.6% in 2013. The average time from decision to delivery is 11 minutes for code red Caesareans and 21 minutes for code orange Caesareans. These time goals are now achieved in 95% of cases. Organizational and anesthetic difficulties are the main causes of delays. The indications for red and orange code Caesareans are appropriate more than two times out of three. Perinatal outcomes are generally favorable, code red Caesareans being life-saving in 15% of cases. No increase in maternal complications has been observed. In sum: each obstetric department should have its own protocols for handling urgent and very urgent Caesarean sections. Continuous monitoring of their implementation, relevance and results should be conducted. Management of extreme urgency must be integrated into the management of patients with identified risks (scarred uterus and twin pregnancies, for example), and also in structures without medical facilities (birthing centers). Obstetric teams must keep in mind that implementation of these protocols in no way dispenses with close monitoring of labour.
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing a natural and real scene as we see in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, the additional information to be displayed requires supporting technologies, such as digital compression, to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
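The lifting-style prediction step can be sketched on a toy 1-D pair. The disparity, offset, and sample values below are invented for illustration; the real scheme operates on 2-D images with estimated disparity and luminance correction, but the key property shown, exact reconstruction from the left view plus the residual, is the same.

```python
# toy 1-D "stereo pair": the right view is a disparity-shifted copy of the
# left view plus a small luminance offset (integer data keeps it lossless)
left = [10, 12, 15, 20, 26, 33, 41, 50]
disparity, offset = 2, 1

def predict_right(left, d):
    # disparity-compensated prediction with edge clamping
    return [left[max(i - d, 0)] for i in range(len(left))]

right = [p + offset for p in predict_right(left, disparity)]  # synthetic view

# encoder: lifting-style prediction step stores only the residual
residual = [r - p for r, p in zip(right, predict_right(left, disparity))]

# decoder: left view + disparity + residual reproduce the right view exactly
reconstructed = [p + e for p, e in zip(predict_right(left, disparity), residual)]
assert reconstructed == right
```

Because the prediction is good, the residual is small and nearly constant, which is what makes joint coding cheaper than coding both views independently.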
Optimized sign language video coding based on eye-tracking analysis
NASA Astrophysics Data System (ADS)
Agrafiotis, Dimitris; Canagarajah, C. N.; Bull, David R.; Dye, Matt; Twyford, Helen; Kyle, Jim; Chung How, James
2003-06-01
The imminent arrival of mobile video telephony will enable deaf people to communicate - as hearing people have been able to do for some time now - anytime/anywhere in their own language, sign language. At low bit rates, coding of sign language sequences is very challenging due to the high level of motion and the need to maintain good image quality to aid understanding. This paper presents optimised coding of sign language video at low bit rates in a way that will favour comprehension of the compressed material by deaf users. Our coding suggestions are based on an eye-tracking study that we have conducted, which allows us to analyse the visual attention of sign language viewers. The results of this study are included in this paper. Analysis and results for two coding methods, one using MPEG-4 video objects and the second using foveation filtering, are presented. Results with foveation filtering are very promising, offering a considerable decrease in bit rate in a way which is compatible with the visual attention patterns of deaf people, as these were recorded in the eye-tracking study.
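Foveation filtering can be sketched as a blur whose strength grows with distance from the viewer's fixation point, mimicking the fall-off of visual acuity away from the fovea. This is an illustrative 1-D box-filter version, not the paper's filter; the fixation index and scale are invented parameters.

```python
def foveate(signal, fixation, scale=4.0):
    # box blur whose radius grows with eccentricity from the fixation index;
    # detail is preserved at fixation and progressively removed elsewhere
    out = []
    for i in range(len(signal)):
        r = int(abs(i - fixation) / scale)        # radius 0 at fixation
        lo, hi = max(0, i - r), min(len(signal), i + r + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

signal = [i % 2 for i in range(64)]               # high-frequency test pattern
out = foveate(signal, fixation=0)

assert out[0] == signal[0]                        # fixation point untouched
assert abs(out[60] - 0.5) < 0.05                  # periphery smoothed to the mean
```

Removing high-frequency detail in the periphery is what yields the bit-rate savings: the encoder spends bits only where eye-tracking says viewers actually look.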
DENSE MEDIUM CYCLONE OPTIMIZATION
Gerald H. Luttrell; Chris J. Barbee; Peter J. Bethell; Chris J. Wood
2005-06-30
Dense medium cyclones (DMCs) are known to be efficient, high-tonnage devices suitable for upgrading particles in the 50 to 0.5 mm size range. This versatile separator, which uses centrifugal forces to enhance the separation of fine particles that cannot be upgraded in static dense medium separators, can be found in most modern coal plants and in a variety of mineral plants treating iron ore, dolomite, diamonds, potash and lead-zinc ores. Due to the high tonnage, a small increase in DMC efficiency can have a large impact on plant profitability. Unfortunately, the knowledge base required to properly design and operate DMCs has been seriously eroded during the past several decades. In an attempt to correct this problem, a set of engineering tools have been developed to allow producers to improve the efficiency of their DMC circuits. These tools include (1) low-cost density tracers that can be used by plant operators to rapidly assess DMC performance, (2) mathematical process models that can be used to predict the influence of changes in operating and design variables on DMC performance, and (3) an expert advisor system that provides plant operators with a user-friendly interface for evaluating, optimizing and trouble-shooting DMC circuits. The field data required to develop these tools was collected by conducting detailed sampling and evaluation programs at several industrial plant sites. These data were used to demonstrate the technical, economic and environmental benefits that can be realized through the application of these engineering tools.
NASA Technical Reports Server (NTRS)
Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.
1973-01-01
This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.
A study of the optimization method used in the NAVY/NASA gas turbine engine computer code
NASA Technical Reports Server (NTRS)
Horsewood, J. L.; Pines, S.
1977-01-01
Sources of numerical noise affecting the convergence properties of Powell's Principal Axis Method of optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.
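The loose-tolerance effect can be reproduced with a small analogue (not the actual CALCFX subroutine): an objective evaluated through an inner iteration whose termination test uses tolerance tol is smooth only down to roughly that tolerance, and an outer optimizer sees the residual jaggedness as noise.

```python
import math

def inner_solve(a, tol):
    # Newton iteration for sqrt(a), terminated by a convergence tolerance --
    # an analogue of an inner iteration stopped by a loose control test
    y = a
    while abs(y * y - a) > tol:
        y = 0.5 * (y + a / y)
    return y

def objective(x, tol):
    # smooth in exact arithmetic; the inner tolerance makes the surface
    # seen by an outer optimizer slightly jagged ("numerical noise")
    return inner_solve(1.0 + x * x, tol)

xs = [i * 1e-3 for i in range(100)]

def noise(tol):
    # worst deviation from the exact smooth objective over a sweep of x
    return max(abs(objective(x, tol) - math.sqrt(1.0 + x * x)) for x in xs)

# tightening the inner tolerance shrinks the noise the optimizer sees
assert noise(1e-3) > noise(1e-12)
```

This is exactly the corrective action the abstract reports: tightening the inner termination tolerances restores the smoothness that Powell's method needs to converge.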
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
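One concrete instance of the column-filtering idea, for a SEC-DED (distance-4) code over GF(2): starting from all nonzero candidate columns, keep only the odd-weight ones, a classic filter that guarantees the required column-wise independence. This is an illustrative sketch, not the patented procedure, and the parameter r = 6 is an assumed example size.

```python
from itertools import combinations

r = 6                                   # number of check (parity) bits
candidates = list(range(1, 1 << r))     # every nonzero length-r column vector

# filter step: keep only odd-weight columns; the XOR of any two odd-weight
# columns has even weight, which enforces the distance-4 independence rule
cols = [v for v in candidates if bin(v).count("1") % 2 == 1]
colset = set(cols)

assert len(cols) == 1 << (r - 1)        # 32 usable columns for r = 6

# single-bit errors get distinct, nonzero syndromes (correctable) ...
assert len(colset) == len(cols) and 0 not in colset
# ... and no double-bit error (XOR of two columns) mimics a single-bit error
for a, b in combinations(cols, 2):
    assert (a ^ b) != 0 and (a ^ b) not in colset
```

The claimed iterative select-and-refilter loop generalizes this: after each column is placed, the candidate set is re-reduced so the partial matrix can never be driven into violating the independence requirement.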
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Aspnäs, Mats; Mattila, Kimmo; Osowski, Kristoffer; Westerholm, Jan
2010-06-01
A central task in protein sequence characterization is the use of a sequence database homology search tool to find similar protein sequences in other individuals or species. PSI-BLAST is a widely used module of the BLAST package that calculates a position-specific score matrix from the best-matching sequences and performs iterated searches, using a weighting method that keeps many similar sequences from dominating the score. For some queries and parameter settings, PSI-BLAST may find many similar high-scoring matches, and therefore up to 80% of the total run time may be spent in this procedure. In this article, we present code optimizations that improve the cache utilization and the overall performance of this procedure. Measurements show that, for queries where the number of similar matches is high, the optimized PSI-BLAST program may be as much as 2.9 times faster than the original program.
Hu, S X
2010-05-01
To efficiently solve the three-dimensional (3D) time-dependent linear and nonlinear Schrödinger equation, we have developed a large-scale parallel code, RSP-FEDVR [B. I. Schneider, L. A. Collins, and S. X. Hu, Phys. Rev. E 73, 036708 (2006)], which combines the finite-element discrete variable representation (FEDVR) with the real-space product algorithm. Using a similar algorithm, we have derived an accurate approach to solve the time-dependent close-coupling (TDCC) equation for exploring two-electron dynamics in linearly polarized intense laser pulses. However, when the number (N) of partial waves used for the TDCC expansion increases, the FEDVR-TDCC code unfortunately slows down, because the potential-matrix operation scales as O(N²). In this paper, we show that the full potential-matrix operation can be decomposed into a series of small-matrix operations utilizing the sparse property of the [N×N] potential matrix. Such optimization speeds up the FEDVR-TDCC code by an order of magnitude for N = 256. This may facilitate the ultimate solution of the 3D two-electron quantum dynamics in ultrashort intense optical laser pulses, where a large number of partial waves are required.
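The decomposition into small-matrix operations can be sketched as a block-sparse matrix-vector product: only the stored nonzero blocks are touched, so the work scales with the number of nonzero blocks rather than with N². The sizes and sparsity pattern below are invented for illustration.

```python
N, B = 16, 4                       # N x N matrix built from B x B blocks
nb = N // B
# block-sparse structure: only the diagonal blocks and one off-diagonal
# block are nonzero (a stand-in for the sparse potential-matrix coupling)
blocks = {}
for i in range(nb):
    blocks[(i, i)] = [[1.0 if r == c else 0.5 for c in range(B)]
                      for r in range(B)]
blocks[(0, nb - 1)] = [[0.25] * B for _ in range(B)]

def to_dense(blocks):
    M = [[0.0] * N for _ in range(N)]
    for (bi, bj), blk in blocks.items():
        for r in range(B):
            for c in range(B):
                M[bi * B + r][bj * B + c] = blk[r][c]
    return M

def matvec_dense(M, v):            # full O(N^2) operation
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

def matvec_blocks(blocks, v):      # series of small B x B operations
    out = [0.0] * N
    for (bi, bj), blk in blocks.items():
        seg = v[bj * B:(bj + 1) * B]
        for r in range(B):
            out[bi * B + r] += sum(blk[r][c] * seg[c] for c in range(B))
    return out

v = [float(i) for i in range(N)]
ref = matvec_dense(to_dense(blocks), v)
out = matvec_blocks(blocks, v)
assert all(abs(x - y) < 1e-12 for x, y in zip(ref, out))

# the block version does far fewer multiplications than the dense one
dense_mults = N * N
block_mults = len(blocks) * B * B
assert block_mults < dense_mults
```

With 5 nonzero blocks out of 16, the block form needs 80 multiplications instead of 256; for N = 256 partial waves the same ratio is what buys the reported order-of-magnitude speedup.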
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
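The MCS-and-airtime assignment can be sketched as a tiny exhaustive search over the ILP's feasible points. All numbers below are illustrative assumptions, not values from the paper: per-MCS rates, a user population bucketed by the best MCS each channel supports, and two SVC layers with per-user utilities.

```python
from itertools import product

rate = [2.0, 4.0, 8.0]          # PHY rate per MCS index (Mb/s, illustrative)
users_with_cap = [5, 3, 2]      # users whose best decodable MCS is 0, 1, 2
layer_bits = [1.0, 2.0]         # base layer, enhancement layer (Mb/s)
layer_value = [10, 4]           # per-user utility of receiving each layer

def users_at_least(m):
    # users whose channel supports MCS m or better
    return sum(users_with_cap[m:])

best = None
for mcs in product(range(len(rate)), repeat=len(layer_bits)):
    airtime = sum(b / rate[m] for b, m in zip(layer_bits, mcs))
    if airtime > 1.0:
        continue                 # violates the shared time-resource budget
    utility, needed = 0, 0
    for layer, m in enumerate(mcs):
        needed = max(needed, m)  # SVC: must decode all lower layers too
        utility += layer_value[layer] * users_at_least(needed)
    if best is None or utility > best[0]:
        best = (utility, mcs)

# robust base layer for everyone, faster MCS for the enhancement layer
assert best == (120, (0, 1))
```

Real instances are solved with an ILP solver rather than enumeration, but the structure is the same: the time budget is the coupling constraint, and the SVC decode-dependency makes higher layers worthless to users who cannot decode the layers beneath them.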
Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code
NASA Astrophysics Data System (ADS)
Askri, Boubaker
2015-10-01
Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low-energy bremsstrahlung photons with the beryllium material. A benchmark test showed that good agreement was achieved when comparing the emitted neutron flux spectra predicted by the Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two-stage Monte Carlo simulation. In the first stage, the distributions of the seven phase-space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage, events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10¹⁰ neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10⁹ neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.
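The two-stage approach can be sketched as: stage one transports particles through the first target and records phase-space coordinates at its boundary; stage two replays samples drawn from that recorded distribution instead of re-simulating the first target. A toy version with a single coordinate and invented numbers:

```python
import random

rng = random.Random(42)

# stage 1: transport electrons through the converter and record one
# phase-space coordinate (here, a toy bremsstrahlung photon energy in MeV)
# at the target boundary
stage1 = [abs(rng.gauss(5.0, 1.5)) for _ in range(20_000)]

def sample_phase_space(recorded, rng):
    # stage-2 source term: replay the stored phase space instead of
    # re-simulating the electron transport every time
    return rng.choice(recorded)

# stage 2: track photons drawn from the recorded distribution
stage2 = [sample_phase_space(stage1, rng) for _ in range(20_000)]

m1 = sum(stage1) / len(stage1)
m2 = sum(stage2) / len(stage2)
assert abs(m1 - m2) < 0.1   # the replayed source reproduces the recorded one
```

Splitting the simulation this way means the expensive electron-in-tungsten stage is run once, while the photon-in-beryllium stage can be rerun cheaply for each geometry variant during optimization.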
Performance of an Optimized Eta Model Code on the Cray T3E and a Network of PCs
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Rancic, Miodrag; Geiger, Jim
2000-01-01
In the year 2001, NASA will launch the satellite TRIANA that will be the first Earth observing mission to provide a continuous, full disk view of the sunlit Earth. As a part of the HPCC Program at NASA GSFC, we have started a project whose objectives are to develop and implement a 3D cloud data assimilation system, by combining TRIANA measurements with model simulation, and to produce accurate statistics of global cloud coverage as an important element of the Earth's climate. For simulation of the atmosphere within this project we are using the NCEP/NOAA operational Eta model. In order to compare TRIANA and the Eta model data on approximately the same grid without significant downscaling, the Eta model will be integrated at a resolution of about 15 km. The integration domain (from -70 to +70 deg in latitude and 150 deg in longitude) will cover most of the sunlit Earth disc and will continuously rotate around the globe following TRIANA. The cloud data assimilation is supposed to run and produce 3D clouds on a near real-time basis. Such a numerical setup and integration design is very ambitious and computationally demanding. Thus, though the Eta model code has been very carefully developed and its computational efficiency has been systematically polished during the years of operational implementation at NCEP, the current MPI version may still have problems with memory and efficiency for the TRIANA simulations. Within this work, we optimize a parallel version of the Eta model code on a Cray T3E and a network of PCs (the HIVE) in order to improve its overall efficiency. Our optimization procedure consists of introducing dynamically allocated arrays to reduce the size of static memory, and optimizing on a single processor by splitting loops to limit the number of streams. All the presented results are derived using an integration domain centered at the equator, with a size of 60 x 60 deg, and with horizontal resolutions of 1/2 and 1/3 deg, respectively. In accompanying
Code Optimization for the Choi-Williams Distribution for ELINT Applications
2009-12-01
Pace, Phillip E.
Subject terms: Choi-Williams distribution, signal processing, algorithm optimization, C programming, low probability of intercept (LPI).
Dense topological spaces and dense continuity
NASA Astrophysics Data System (ADS)
Aldwoah, Khaled A.
2013-09-01
There are several attempts to generalize (or "widen") the concept of topological space. This paper uses equivalence relations to generalize the concept of topological space. Through this generalization, a particular topology on a nonempty set X gives rise to many new topologies, each of which we call a dense topology. In addition, we formulate some simple properties of dense topologies and study suitable generalizations of the concepts of limit points, closedness, and continuity, as well as Jackson, Nörlund, and Hahn dense topologies.
Zhao, Hui; Li, Yingcai
2010-01-10
In two papers [Proc. SPIE 4471, 272-280 (2001) and Appl. Opt. 43, 2709-2721 (2004)], a logarithmic phase mask was proposed and shown to be effective in extending the depth of field; however, according to our research, this mask is not ideal, because the corresponding defocused modulation transfer function exhibits large oscillations in the low-frequency region even when the mask is optimized. For this reason, in a previously published paper [Opt. Lett. 33, 1171-1173 (2008)], we proposed an improved logarithmic phase mask obtained through a small modification. The new mask not only mitigates these drawbacks but is also less sensitive to focus errors according to Fisher information criteria. However, that performance comparison was carried out without optimizing the modified mask, which was not a fair test. In this paper, we first optimize the modified logarithmic phase mask and then analyze its performance; more convincing results are obtained based on the analysis of several frequently used metrics.
MagRad: A code to optimize the operation of superconducting magnets in a radiation environment
Yeaw, Christopher T.
1995-01-01
A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.
Optimal rate control for video coding based on a hybrid MMAX/MMSE criterion
NASA Astrophysics Data System (ADS)
Lee, Sang-Yong; Ortega, Antonio
2003-05-01
In this paper, we consider the problem of rate control for video transmission. We focus on finding the off-line optimal rate control for constant bit-rate (CBR) transmission, where the size of the encoder buffer and the channel rate are the constraints. To ensure that the minimum quality over all data units (e.g., macroblocks, video frames, or groups of pictures) is maximized, we use a minimum-maximum distortion (MMAX) criterion for this buffer-constrained problem. We show that, due to the buffer constraints, an MMAX solution leads to a relatively low average distortion because the total rate budget is not completely used. Therefore, after finding an MMAX solution, we propose an additional average-distortion minimization stage that increases the overall quality of the sequence using the remaining resources. The proposed algorithm (denoted MMAX+, as it incorporates both MMAX and the additional average-quality optimization stage) increases average quality with respect to the MMAX solution while providing much more constant quality than MMSE solutions. Moreover, we show how the MMAX+ approach can be implemented with low complexity.
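As a toy illustration of the two-stage MMAX+ idea, the sketch below allocates integer bits under a total budget: stage 1 bisects on the max-distortion target (the MMAX step), and stage 2 spends any leftover bits greedily to reduce average distortion. The distortion model d_i(b) = c_i·4^(-b) and the greedy second stage are illustrative assumptions; the paper's buffer constraints are omitted.

```python
import math

def mmax_plus(c, budget):
    """Toy MMAX+ bit allocation (buffer constraints omitted for brevity).

    c[i]: distortion scale of unit i, with distortion model d_i(b) = c_i * 4**-b
    (each extra bit quarters the distortion -- a standard high-rate assumption).
    """
    n = len(c)

    # --- Stage 1 (MMAX): bisect on the max-distortion target Dmax; the minimum
    #     rate meeting a target is b_i = ceil(log4(c_i / Dmax)).
    def bits_for(dmax):
        return [max(0, math.ceil(math.log(ci / dmax, 4))) for ci in c]

    lo, hi = 1e-12, max(c)
    for _ in range(100):                       # bisection on Dmax
        mid = 0.5 * (lo + hi)
        if sum(bits_for(mid)) <= budget:
            hi = mid                           # feasible: try a lower Dmax
        else:
            lo = mid
    b = bits_for(hi)

    # --- Stage 2 (+): spend leftover bits to minimize *average* distortion,
    #     giving each bit to the unit whose distortion drops the most.
    dist = [ci * 4.0 ** -bi for ci, bi in zip(c, b)]
    for _ in range(budget - sum(b)):
        i = max(range(n), key=lambda k: dist[k])   # largest absolute drop
        b[i] += 1
        dist[i] /= 4.0
    return b, dist
```

With c = [16, 4, 1] and a 6-bit budget, stage 1 equalizes all distortions at 0.25; a larger budget leaves leftover bits for stage 2 to lower the average further.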
Optimized multilevel codebook searching algorithm for vector quantization in image coding
NASA Astrophysics Data System (ADS)
Cao, Hugh Q.; Li, Weiping
1996-02-01
An optimized multi-level codebook searching (MCS) algorithm for vector quantization is presented in this paper. Although it belongs to the category of fast nearest neighbor searching (FNNS) algorithms for vector quantization, the MCS algorithm is not a variation of any existing FNNS algorithm (such as the k-d tree, partial-distance, or triangle-inequality searching algorithms). A multi-level search theory is introduced, and the problem of implementing this theory is solved by a specially defined irregular tree structure that can be built from a training set. This irregular tree structure differs from the tree structures used in TSVQ, pruned-tree VQ, and quadtree VQ. Strictly speaking, it is not a tree at all, since it allows a node to have more than one set of parents; it is a directed graph. This is the essential difference between the MCS algorithm and other TSVQ algorithms, and it ensures better performance. An efficient design procedure is given for finding the optimized irregular tree for a practical source. Simulation results of applying the MCS algorithm to image VQ show that it can reduce the search complexity to less than 3% of that of exhaustive-search vector quantization (ESVQ) (4096 codevectors, dimension 16) while introducing negligible error (0.064 dB degradation from ESVQ). Simulation results also show that the search complexity increases nearly linearly with bit rate.
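For context, here is a sketch of the classic partial-distance search, one of the baseline FNNS algorithms the abstract contrasts against (not the MCS algorithm itself, whose irregular-tree structure is specific to the paper): a candidate codevector is abandoned as soon as its running squared distance exceeds the best distance found so far.

```python
import numpy as np

def partial_distance_search(codebook, x):
    """Nearest-codevector search with the partial-distance early-exit test."""
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for cj, xj in zip(c, x):
            d += (cj - xj) ** 2
            if d >= best_d:          # early exit: cannot beat current best
                break
        else:                        # completed the loop: new best match
            best_i, best_d = i, d
    return best_i, best_d
```

The result is identical to an exhaustive search; only the number of multiply-accumulate operations is reduced.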
Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.
2012-07-01
Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum-99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had been 'experimentally demonstrated to be among the safest of all various types of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code Fluidity; the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)
Optimal coding-decoding for systems controlled via a communication channel
NASA Astrophysics Data System (ADS)
Yi-wei, Feng; Guo, Ge
2013-12-01
In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Different from previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and a Youla-parameter network architecture. We find that the optimal coder and decoder can be realised for the different network configurations. The results are useful in determining the minimum channel capacity needed to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1983-01-01
The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.
ROCOPT: A user friendly interactive code to optimize rocket structural components
NASA Technical Reports Server (NTRS)
Rule, William K.
1989-01-01
ROCOPT is a user-friendly, graphically-interfaced, microcomputer-based computer program (IBM compatible) that optimizes rocket components by minimizing the structural weight. The rocket components considered are ring stiffened truncated cones and cylinders. The applied loading is static, and can consist of any combination of internal or external pressure, axial force, bending moment, and torque. Stress margins are calculated by means of simple closed form strength of material type equations. Stability margins are determined by approximate, orthotropic-shell, closed-form equations. A modified form of Powell's method, in conjunction with a modified form of the external penalty method, is used to determine the minimum weight of the structure subject to stress and stability margin constraints, as well as user input constraints on the structural dimensions. The graphical interface guides the user through the required data prompts, explains program options and graphically displays results for easy interpretation.
BMI optimization by using parallel UNDX real-coded genetic algorithm with Beowulf cluster
NASA Astrophysics Data System (ADS)
Handa, Masaya; Kawanishi, Michihiro; Kanki, Hiroshi
2007-12-01
This paper deals with a global optimization algorithm for Bilinear Matrix Inequalities (BMIs) based on the Unimodal Normal Distribution Crossover (UNDX) genetic algorithm (GA). First, by analyzing the structure of BMIs, the existence of typical difficult structures is confirmed. Then, to improve the performance of the algorithm, based on the results of this structural analysis and on the characteristic properties of BMIs, we propose an algorithm that uses a primary search direction with a relaxed Linear Matrix Inequality (LMI) convex estimation. Moreover, within these algorithms we propose two types of evaluation methods for GA individuals, based on LMI calculations that further exploit the characteristic properties of BMIs. In addition, to reduce computation time, we propose a parallelization of the real-coded GA using a master-worker paradigm with cluster computing techniques.
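A sketch of the UNDX operator at the core of such a real-coded GA, using commonly cited default parameters (the exact variant and settings used by the authors may differ): a child is drawn around the midpoint of two primary parents, with normal noise along their difference vector plus smaller noise in the orthogonal directions, scaled by the distance of a third parent from that line.

```python
import numpy as np

def undx(p1, p2, p3, rng, sigma_xi=0.5, sigma_eta=0.35):
    """One child by Unimodal Normal Distribution Crossover (UNDX)."""
    n = len(p1)
    m = 0.5 * (p1 + p2)                  # midpoint of the primary parents
    d = p2 - p1                          # primary search direction
    # Distance of the third parent from the line through p1 and p2:
    if np.linalg.norm(d) > 1e-12:
        t = d / np.linalg.norm(d)
        perp = (p3 - p1) - ((p3 - p1) @ t) * t
    else:
        t = np.zeros(n)
        perp = p3 - p1
    D = np.linalg.norm(perp)
    # Normal perturbation along d, plus noise projected off the d axis:
    child = m + rng.normal(0.0, sigma_xi) * d
    e = rng.normal(0.0, sigma_eta / np.sqrt(n), size=n) * D
    child += e - (e @ t) * t
    return child
```

Children are distributed symmetrically about the parents' midpoint, which is what makes the operator well suited to the master-worker parallelization described above: each worker can evaluate independently generated children.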
More, R.M.
1986-01-01
Recent experiments with high-power pulsed lasers have strongly encouraged the development of improved theoretical understanding of highly charged ions in a dense plasma environment. This work examines the theory of dense plasmas with emphasis on general rules which govern matter at extreme high temperature and density. 106 refs., 23 figs.
Maslov, V. I.; Lotov, K. V.; Onishchenko, I. N.; Svistun, O. M.
2010-06-16
It is shown that an optimal difference exists between the repetition frequency of the electron bunches and that of the wakefield bubbles, such that the first N-1 drive bunches strengthen the chain of wakefield bubbles and the N-th bunch is placed in the maximal accelerating wakefield.
NASA Astrophysics Data System (ADS)
Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry
2016-05-01
A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques make it possible to capture a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance, and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions; exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis; for this purpose, an optimization problem minimizing a joint l2 - l1 norm is solved to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, since only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach that recovers a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that minimizes the l2-norm, penalized by the l1-norm to force the solution to be sparse, and by the nuclear norm to force the solution to be low rank. Theoretical analysis, along with a set of simulations over different data sets, shows that simultaneously exploiting the low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of peak signal-to-noise ratio (PSNR).
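The l1 and nuclear-norm penalties are typically handled with their proximal operators; the sketch below shows these two standard building blocks (an assumption about the solver's structure, not the authors' exact algorithm): element-wise soft-thresholding for sparsity and singular-value thresholding for low rank.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: element-wise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Proximal operator of the nuclear norm: shrink the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

In a proximal-gradient reconstruction, a data-fidelity (l2) gradient step would alternate with these two shrinkage steps, driving the iterate toward a matrix that is simultaneously sparse and low rank.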
User's guide for the BNW-III optimization code for modular dry/wet-cooled power plants
Braun, D.J.; Faletti, D.W.
1984-09-01
This user's guide describes BNW-III, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of the Dry Cooling Enhancement Program sponsored by the US Department of Energy (DOE). The BNW-III code models a modular dry/wet cooling system for a nuclear or fossil fuel power plant. The purpose of this guide is to give the code user a brief description of what the BNW-III code is and how to use it. It describes the cooling system being modeled and the various models used. A detailed description of code input and code output is also included. The BNW-III code was developed to analyze a specific cooling system layout. However, there is a large degree of freedom in the type of cooling modules that can be selected and in the performance of those modules. The costs of the modules are input to the code, giving the user a great deal of flexibility.
Kinetic Simulations of Dense Plasma Focus Breakdown
NASA Astrophysics Data System (ADS)
Schmidt, A.; Higginson, D. P.; Jiang, S.; Link, A.; Povilus, A.; Sears, J.; Bennett, N.; Rose, D. V.; Welch, D. R.
2015-11-01
A dense plasma focus (DPF) device is a type of plasma gun that drives current through a set of coaxial electrodes to assemble gas inside the device and then implode that gas on axis to form a Z-pinch. This implosion drives hydrodynamic and kinetic instabilities that generate strong electric fields, producing a short, intense pulse of x-rays, high-energy (>100 keV) electrons and ions, and (in deuterium gas) neutrons. A strong factor in pinch performance is the initial breakdown and ionization of the gas along the insulator surface separating the two electrodes. The smoothness and isotropy of this ionized sheath are imprinted on the current sheath that travels along the electrodes, making it an important portion of the DPF to both understand and optimize. Here we use kinetic simulations in the particle-in-cell code LSP to model the breakdown. Simulations are initiated with neutral gas, and the breakdown is modeled self-consistently as driven by a charged capacitor system. We also investigate novel geometries for the insulator and electrodes to attempt to control the electric field profile. The initial ionization fraction of the gas is explored computationally to gauge possible advantages of pre-ionization, which could be created experimentally via lasers or a glow discharge. Prepared by LLNL under Contract DE-AC52-07NA27344.
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Barnes, Taylor; Wichmann, Nathan; Raman, Karthik; Sasanka, Ruchira; Louie, Steven G.
2016-10-06
We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels as well as on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread and node-level parallelism. We discuss locality changes (including the consequence of the lack of L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights-Landing including a roofline study of code performance before and after a number of optimizations. We find that the GW method is particularly well-suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band-pairs, and frequencies.
Fleishman, Gregory D.; Kuznetsov, Alexey A.
2010-10-01
Radiation produced by charged particles gyrating in a magnetic field is highly significant in the astrophysics context. Persistently increasing resolution of astrophysical observations calls for corresponding three-dimensional modeling of the radiation. However, available exact equations are prohibitively slow in computing a comprehensive table of high-resolution models required for many practical applications. To remedy this situation, we develop approximate gyrosynchrotron (GS) codes capable of quickly calculating the GS emission (in non-quantum regime) from both isotropic and anisotropic electron distributions in non-relativistic, mildly relativistic, and ultrarelativistic energy domains applicable throughout a broad range of source parameters including dense or tenuous plasmas and weak or strong magnetic fields. The computation time is reduced by several orders of magnitude compared with the exact GS algorithm. The new algorithm performance can gradually be adjusted to the user's needs depending on whether precision or computation speed is to be optimized for a given model. The codes are made available for users as a supplement to this paper.
NASA Astrophysics Data System (ADS)
Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto
2015-08-01
We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into QR codes, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor the noise problems of other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes show differences between them due to the processing involved. Recovered QR codes can still be scanned successfully, thanks to their noise tolerance. Finally, scanning the recovered QR codes in the appropriate sequence yields a noiseless retrieved message. Additionally, for maximum security, the multiplexed pack can be multiplied by a digital diffuser to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser; as this is a digital operation, no noise is added. Therefore, this technique is robust in three ways, involving multiplexing, encryption, and the need for a sequence to retrieve the outcome.
NASA Astrophysics Data System (ADS)
Salavati, S.; Coyle, T. W.; Mostaghimi, J.
2015-10-01
Open-pore metallic foam core sandwich panels prepared by thermal spraying of a coating on the foam structures can be used as high-efficiency heat transfer devices due to their high surface-area-to-volume ratio. The structural, mechanical, and physical properties of thermally sprayed skins play a significant role in the performance of the related devices. These properties are mainly controlled by the porosity content, oxide content, adhesion strength, and stiffness of the deposited coating. In this study, the effects of grit-blasting process parameters on the characteristics of the temporary surface created on the metallic foam substrate, and on the twin-wire arc-sprayed alloy 625 coating subsequently deposited on the foam, were investigated through response surface methodology. Characterization of the prepared surface and sprayed coating was conducted by scanning electron microscopy, roughness measurements, and adhesion testing. Using statistical design of experiments (the response surface method), a model was developed to predict the effect of grit-blasting parameters on the surface roughness of the prepared foam and on the porosity content of the sprayed coating. The coating porosity and adhesion strength were found to be determined by the substrate surface roughness, which could be controlled by the grit-blasting parameters. The grit-blasting parameters were then optimized using the fitted model to minimize the porosity content of the coating while maintaining high adhesion strength.
DIANE multiparticle transport code
NASA Astrophysics Data System (ADS)
Caillaud, M.; Lemaire, S.; Ménard, S.; Rathouit, P.; Ribes, J. C.; Riz, D.
2014-06-01
DIANE is the general Monte Carlo code developed at CEA-DAM. DIANE is a 3D multiparticle multigroup code. DIANE includes automated biasing techniques and is optimized for massive parallel calculations.
Dellin, T.A.; Fish, M.J.; Yang, C.L.
1981-08-01
DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.
Dense high temperature ceramic oxide superconductors
Landingham, R.L.
1993-10-12
Dense superconducting ceramic oxide articles of manufacture and methods for producing these articles are described. Generally these articles are produced by first processing these superconducting oxides by ceramic processing techniques to optimize materials properties, followed by reestablishing the superconducting state in a desired portion of the ceramic oxide composite.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in many table memory accesses and therefore high table power consumption. To reduce the large number of memory accesses incurred by current methods, and hence their power consumption, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index-search technique that reduces memory accesses during table look-up and thereby reduces table power consumption. Specifically, our scheme uses index search to cut down the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving table look-up power. The experimental results show that the proposed index-search table look-up algorithm lowers memory-access consumption by about 60% compared with a sequential-search scheme, saving considerable power for CAVLD in H.264/AVC.
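A toy model of the indexing idea: group the variable-length-code table by the leading-zero count of each codeword, so that decoding consults only a small sub-table instead of scanning every entry. The table contents, the `build_index`/`decode_one` helpers, and the bit-string representation are illustrative assumptions, not the paper's actual CAVLD tables.

```python
def build_index(code_table):
    """Group a VLC table {codeword: symbol} by leading-zero count, so that
    decoding indexes a small sub-table instead of scanning every codeword."""
    index = {}
    for word, symbol in code_table.items():
        zeros = len(word) - len(word.lstrip("0"))
        index.setdefault(zeros, {})[word] = symbol
    return index

def decode_one(bits, index):
    """Decode one symbol from a bit string using the leading-zero index;
    returns (symbol, remaining bits)."""
    zeros = len(bits) - len(bits.lstrip("0"))
    if zeros == len(bits):                    # all zeros: no codeword starts here
        raise ValueError("invalid bitstream")
    for word, symbol in index.get(zeros, {}).items():
        if bits.startswith(word):
            return symbol, bits[len(word):]
    raise ValueError("invalid bitstream")
```

Because the codes are prefix-free, at most one entry of the selected sub-table can match, so the matching work per symbol is bounded by the sub-table size rather than the full table size.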
Hiromasa Chitose; Akitoshi Hotta; Akira Ohnuki; Ken Fujimura
2006-07-01
The Reduced-Moderation Water Reactor (RMWR) is being developed at the Japan Atomic Energy Agency, and demonstration of its core heat-removal performance is one of the most important issues. However, a full-scale bundle experiment is technically difficult because the fuel rod bundle is large and would consume a huge amount of electricity. Hence, it is desirable to develop an analysis code that can simulate RMWR core thermal-hydraulic performance with high accuracy. Subchannel analysis is the most powerful technique for this problem. A subchannel analysis code, NASCA (Nuclear-reactor Advanced Sub-Channel Analysis code), has been developed to improve capabilities for analyzing transient two-phase flow phenomena, boiling transition (BT), and post-BT behavior, and the NASCA code is applicable to thermal-hydraulic analysis of current BWR fuel. In the present study, the prediction accuracy of the NASCA code has been investigated using reduced-scale rod bundle test data, and its applicability to the RMWR has been improved by optimizing the mechanistic constitutive models. (authors)
Computational electromagnetics and parallel dense matrix computations
Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.
1995-06-01
We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time, and frequency resources of an underground tunnel are open, it is proposed to build wireless sensor nodes on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to use cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when several source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm based on particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using MC-CDMA-based wireless sensor nodes, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm. PMID:26343660
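A minimal generic particle swarm optimizer of the kind underlying D-PSO (the parameters, bounds, and test function below are illustrative; the paper's multiuser-detection cost and discrete search space are not reproduced):

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward its
    personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))    # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal best positions
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())
```

In the paper's setting, the cost would score candidate transmitted-bit vectors against the received MC-CDMA signal; here any continuous function works, e.g. `pso(lambda p: float((p ** 2).sum()), dim=3)`.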
Fragility in dense suspensions
NASA Astrophysics Data System (ADS)
Mari, Romain; Cates, Mike
Dense suspensions can jam under shear when the volume fraction of solid material is large enough. In this work we investigate the mechanical properties of shear-jammed suspensions with numerical simulations. In particular, we address the issue of the fragility of these systems, i.e., the type of mechanical response (elastic or plastic) they show when subjected to a mechanical load differing from the one applied during their preparation history.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.
Soto, Marcelo A; Taki, Mohammad; Bolognini, Gabriele; Di Pasquale, Fabrizio
2012-03-26
Sub-meter distributed optical fiber sensing based on Brillouin optical time-domain analysis with differential pulse-width pairs (DPP-BOTDA) is combined with the use of optical pre-amplification and pulse coding. In order to provide a significant measurement SNR enhancement and to avoid distortions in the Brillouin gain spectrum due to acoustic-wave pre-excitation, the pulse width and duty cycle of Simplex coding based on return-to-zero pulses are optimized through simulations. In addition, the use of linear optical pre-amplification increases the receiver sensitivity and the overall dynamic range of DPP-BOTDA measurements. Experimental results demonstrate for the first time a spatial resolution of ~25 cm over a 60 km standard single-mode fiber (equivalent to ~240,000 discrete sensing points) with a temperature resolution of 1.2°C and a strain resolution of 24 με.
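The quoted figure of ~240,000 discrete sensing points follows directly from the fiber length and the spatial resolution:

```python
fiber_length_m = 60_000   # 60 km standard single-mode fiber
resolution_m = 0.25       # ~25 cm spatial resolution

# One independent sensing point per resolution cell along the fiber.
sensing_points = fiber_length_m / resolution_m
```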
NASA Astrophysics Data System (ADS)
Ren, Danping; Wu, Shanshan; Zhang, Lijing
2016-09-01
In view of the global control and flexible monitoring capabilities of software-defined networks (SDN), we propose a new SDN-based optical access network architecture dedicated to Wavelength Division Multiplexing Passive Optical Network (WDM-PON) systems. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce system time delay and energy consumption.
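The abstract does not detail the NC scheme; as an illustration, the XOR primitive that inter-session network coding builds on lets a single coded transmission serve two receivers (packet contents here are hypothetical):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets, the basic network-coding primitive."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two downstream packets are combined into a single coded transmission;
# each receiver recovers the other's packet by XOR-ing with its own copy.
p1, p2 = b"ONU1-data", b"ONU2-info"
coded = xor_bytes(p1, p2)
```

Sending `coded` once instead of `p1` and `p2` separately is the source of the wavelength-resource saving such schemes target.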
Dense matter at RAON: Challenges and possibilities
NASA Astrophysics Data System (ADS)
Lee, Yujeong; Lee, Chang-Hwan; Gaitanos, T.; Kim, Youngman
2016-11-01
Dense nuclear matter is ubiquitous in modern nuclear physics because it is related to many interesting microscopic and macroscopic phenomena such as heavy-ion collisions, nuclear structure, and neutron stars. The on-going rare isotope science project in Korea will build a rare isotope accelerator complex called RAON. One of the main goals of RAON is to investigate rare isotope physics, including dense nuclear matter. Using the relativistic Boltzmann-Uehling-Uhlenbeck (RBUU) transport code, we estimate the properties of nuclear matter that can be created in low-energy heavy-ion collisions at RAON. We give predictions for the maximum baryon density, the isospin asymmetry and the temperature of the nuclear matter that would be formed during 197Au+197Au and 132Sn+64Ni reactions. Various theoretical studies indicate that, with a large isospin asymmetry, the critical densities or temperatures of phase transitions to exotic states decrease. Because a large isospin asymmetry is expected in the dense matter created at RAON, we discuss the possibilities of observing exotic states of dense nuclear matter at RAON.
Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta; Yu, Xianbin; Ukhanova, Anna; Llorente, Roberto; Monroy, Idelfonso Tafur; Forchhammer, Søren
2011-12-12
The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with the low complexity envelope detection solution is investigated. We present both experimental studies and simulation of high quality high-definition compressed video transmission over 60 GHz fiber-wireless link. Using advanced video coding we satisfy low complexity and low delay constraints, meanwhile preserving the superb video quality after significantly extended wireless distance.
NASA Astrophysics Data System (ADS)
Braaten, Eric; Mohapatra, Abhishek; Zhang, Hong
2016-09-01
If the dark matter particles are axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound systems of axions. In the previously known solutions for axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. The mass of these dilute axion stars cannot exceed a critical mass, which is about 10^{-14} M⊙ if the axion mass is 10^{-4} eV. We study axion stars using a simple approximation to the effective potential of the nonrelativistic effective field theory for axions. We find a new branch of dense axion stars in which gravity is balanced by the mean-field pressure of the axion Bose-Einstein condensate. The mass on this branch ranges from about 10^{-20} M⊙ to about M⊙. If a dilute axion star with the critical mass accretes additional axions and collapses, it could produce a bosenova, leaving a dense axion star as the remnant.
NASA Astrophysics Data System (ADS)
Mohapatra, Abhishek; Braaten, Eric; Zhang, Hong
2016-03-01
If the dark matter consists of axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound Bose-Einstein condensates of axions. In the previously known axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. If the axion mass energy is mc^2 = 10^{-4} eV, these dilute axion stars have a maximum mass of about 10^{-14} M⊙. We point out that there are also dense axion stars in which gravity is balanced by the mean-field pressure of the axion condensate. We study axion stars using the leading term in a systematically improvable approximation to the effective potential of the nonrelativistic effective field theory for axions. Using the Thomas-Fermi approximation, in which the kinetic pressure is neglected, we find a sequence of new branches of axion stars in which gravity is balanced by the mean-field interaction energy of the axion condensate. If mc^2 = 10^{-4} eV, the first branch of these dense axion stars has mass ranging from about 10^{-11} M⊙ to about M⊙.
NASA Astrophysics Data System (ADS)
Zhang, Wendy; Dodge, Kevin M.; Peters, Ivo R.; Ellowitz, Jake; Klein Schaarsberg, Martin H.; Jaeger, Heinrich M.
2014-03-01
Upon impact onto a solid surface at several meters-per-second, a dense suspension plug splashes by ejecting liquid-coated particles. We study the mechanism for splash formation using experiments and a numerical model. In the model, the dense suspension is idealized as a collection of cohesionless, rigid grains with finite surface roughness. The grains also experience lubrication drag as they approach, collide inelastically and rebound away from each other. Simulations using this model reproduce the measured momentum distribution of ejected particles. They also provide direct evidence supporting the conclusion from earlier experiments that inelastic collisions, rather than viscous drag, dominate when the suspension contains macroscopic particles immersed in a low-viscosity solvent such as water. Finally, the simulations reveal two distinct routes for splash formation: a particle can be ejected by a single high momentum-change collision. More surprisingly, a succession of small momentum-change collisions can accumulate to eject a particle outwards. Supported by NSF through its MRSEC program (DMR-0820054) and fluid dynamics program (CBET-1336489).
NASA Astrophysics Data System (ADS)
Dodge, Kevin M.; Peters, Ivo R.; Ellowitz, Jake; Schaarsberg, Martin H. Klein; Jaeger, Heinrich M.; Zhang, Wendy W.
2014-11-01
A dense suspension drop impacting a solid surface at speeds of several meters per second splashes by ejecting individual liquid-coated particles. Suppression or reduction of this splash is important for thermal spray coating and additive manufacturing. Accomplishing this aim requires distinguishing whether the splash is generated by individual scattering events or by collective motion reminiscent of liquid flow. Since particle inertia dominates over surface tension and viscous drag in a strong splash, we model suspension splash using a discrete-particle simulation in which the densely packed macroscopic particles experience inelastic collisions but zero friction or cohesion. Numerical results based on this highly simplified model are qualitatively consistent with observations. They also show that approximately 70% of the splash is generated by collective motion. Here an initially downward-moving particle is ejected into the splash because it experiences a succession of low-momentum-change collisions whose effects do not cancel but instead accumulate. The remainder of the splash is generated by scattering events in which a small number of high-momentum-change collisions cause a particle to be ejected upwards. Current Address: Physics of Fluids Group, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands.
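The accumulation route can be illustrated with a one-dimensional collision sketch (masses, restitution and velocities are illustrative assumptions, not the simulation's parameters): with zero restitution, a few low-momentum-change collisions from below turn an initially downward-moving particle around.

```python
def collide(v1, v2, m1=1.0, m2=1.0, e=0.0):
    """1-D two-body collision with restitution coefficient e
    (e=0 is perfectly inelastic); momentum is conserved."""
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
    dv = v1 - v2
    return (v_cm - e * m2 * dv / (m1 + m2),
            v_cm + e * m1 * dv / (m1 + m2))

v = -0.1                         # particle initially moving downward
for v_below in (1.0, 1.0, 1.0):  # successive hits from upward-moving grains
    v, _ = collide(v, v_below, e=0.0)
# After three collisions the particle is moving upward.
```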
Braaten, Eric; Mohapatra, Abhishek; Zhang, Hong
2016-09-16
If the dark matter particles are axions, gravity can cause them to coalesce into axion stars, which are stable gravitationally bound systems of axions. In the previously known solutions for axion stars, gravity and the attractive force between pairs of axions are balanced by the kinetic pressure. The mass of these dilute axion stars cannot exceed a critical mass, which is about 10^{-14}M_{⊙} if the axion mass is 10^{-4} eV. We study axion stars using a simple approximation to the effective potential of the nonrelativistic effective field theory for axions. We find a new branch of dense axion stars in which gravity is balanced by the mean-field pressure of the axion Bose-Einstein condensate. The mass on this branch ranges from about 10^{-20}M_{⊙} to about M_{⊙}. If a dilute axion star with the critical mass accretes additional axions and collapses, it could produce a bosenova, leaving a dense axion star as the remnant.
NASA Astrophysics Data System (ADS)
Valenza, Ryan A.; Seidler, Gerald T.
2016-03-01
The intense femtosecond-scale pulses from x-ray free electron lasers (XFELs) are able to create and interrogate interesting states of matter characterized by long-lived nonequilibrium semicore or core electron occupancies or by the heating of dense phases via the relaxation cascade initiated by the photoelectric effect. We address here the latter case of "warm dense matter" (WDM) and investigate the observable consequences of x-ray heating of the electronic degrees of freedom in crystalline systems. We report temperature-dependent density functional theory calculations for the x-ray diffraction from crystalline LiF, graphite, diamond, and Be. We find testable, strong signatures of condensed-phase effects that emphasize the importance of wide-angle scattering to study nonequilibrium states. These results also suggest that the reorganization of the valence electron density at eV-scale temperatures presents a confounding factor to achieving atomic resolution in macromolecular serial femtosecond crystallography (SFX) studies at XFELs, as performed under the "diffract before destroy" paradigm.
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.
1990-01-01
Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1988-01-01
Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.
Salko, Robert K; Schmidt, Rodney; Avramova, Maria N
2014-01-01
This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core, sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed-memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input compared to the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full-core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
NASA Astrophysics Data System (ADS)
Ohana, N.; Jocksch, A.; Lanti, E.; Tran, T. M.; Brunner, S.; Gheller, C.; Hariri, F.; Villard, L.
2016-11-01
With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices.
NASA Technical Reports Server (NTRS)
Hill, S. A.
1994-01-01
BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element of each threat. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability
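The Poisson model behind the PNP figure reduces to a one-line formula; a minimal sketch (the flux, area and exposure-time values are hypothetical, not BUMPERII data):

```python
import math

def pnp(flux_per_m2_yr, area_m2, years):
    """Probability of no penetration under a Poisson impact model:
    expected penetrating impacts N = flux * area * time, PNP = exp(-N)."""
    n_expected = flux_per_m2_yr * area_m2 * years
    return math.exp(-n_expected)

# Hypothetical element: 1e-5 penetrating impacts/m^2/yr, 10 m^2, 10 years.
p = pnp(1e-5, 10.0, 10.0)
```

BUMPERII's contribution is computing the effective flux per element, accounting for shadowing, attitude, and wall response; the final combination step is this exponential.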
NASA Astrophysics Data System (ADS)
Schimeczek, C.; Engel, D.; Wunner, G.
2014-05-01
account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78, 032515 (2008)].
Yu, Lianchun; Liu, Liwei
2014-03-01
The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.
Ariel's Densely Pitted Surface
NASA Technical Reports Server (NTRS)
1986-01-01
This mosaic of the four highest-resolution images of Ariel represents the most detailed Voyager 2 picture of this satellite of Uranus. The images were taken through the clear filter of Voyager's narrow-angle camera on Jan. 24, 1986, at a distance of about 130,000 kilometers (80,000 miles). Ariel is about 1,200 km (750 mi) in diameter; the resolution here is 2.4 km (1.5 mi). Much of Ariel's surface is densely pitted with craters 5 to 10 km (3 to 6 mi) across. These craters are close to the threshold of detection in this picture. Numerous valleys and fault scarps crisscross the highly pitted terrain. Voyager scientists believe the valleys have formed over down-dropped fault blocks (graben); apparently, extensive faulting has occurred as a result of expansion and stretching of Ariel's crust. The largest fault valleys, near the terminator at right, as well as a smooth region near the center of this image, have been partly filled with deposits that are younger and less heavily cratered than the pitted terrain. Narrow, somewhat sinuous scarps and valleys have been formed, in turn, in these young deposits. It is not yet clear whether these sinuous features have been formed by faulting or by the flow of fluids.
JPL manages the Voyager project for NASA's Office of Space Science.
Mercury's Densely Cratered Surface
NASA Technical Reports Server (NTRS)
1974-01-01
Mariner 10 took this picture (FDS 27465) of the densely cratered surface of Mercury when the spacecraft was 18,200 kilometers (8085 miles) from the planet on March 29. The dark line across the top of the picture is a 'dropout' of a few TV lines of data. At lower left, a portion of a 61 kilometer (38 mile) crater shows a flow front extending across the crater floor and filling more than half of the crater. The smaller, fresh crater at center is about 25 kilometers (15 miles) in diameter. Craters as small as one kilometer (about one-half mile) across are visible in the picture.
The Mariner 10 mission, managed by the Jet Propulsion Laboratory for NASA's Office of Space Science, explored Venus in February 1974 on the way to three encounters with Mercury, in March and September 1974 and in March 1975. The spacecraft took more than 7,000 photos of Mercury, Venus, the Earth and the Moon.
Image Credit: NASA/JPL/Northwestern University
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
2014-12-01
The report evaluates several forward error correction methods: a turbo code, a low-density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. Many civilian systems use LDPC FEC codes, and the Navy is planning to use LDPC for some future systems.
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for a similar concatenated scheme which uses a convolutional code. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
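The bandwidth-expansion comparison follows from the code rates alone; using the widely deployed RS(255,223) as an assumed example (the report does not specify which RS code it used):

```python
def expansion(k, n):
    """Bandwidth expansion of a rate-k/n code: n/k - 1."""
    return n / k - 1

rs_overhead = expansion(223, 255)   # RS(255,223): ~14.3%, inside the 10-50% range
conv_overhead = expansion(1, 2)     # rate-1/2 convolutional code: 100%
```

Because TCM adds redundancy at the modulation stage rather than with extra symbols, only the outer RS code contributes to the expansion in the concatenated scheme.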
NASA Astrophysics Data System (ADS)
Eremets, M.; Troyan, I.
2012-12-01
Hydrogen at ambient pressures and low temperatures forms a molecular crystal which is expected to display metallic properties under megabar pressures. This metal is predicted to be superconducting with a very high critical temperature Tc of 200-400 K. The superconductor may potentially be recovered metastably at ambient pressures, and it may acquire a new quantum state as a metallic superfluid and a superconducting superfluid. Recent experiments performed at low temperatures T < 100 K showed that at record pressures of 300 GPa, hydrogen remains in the molecular state and is an insulator with a band gap of approximately 2 eV. Given our current experimental and theoretical understanding, hydrogen is expected to become metallic at pressures of 400-500 GPa, beyond the current limits of static pressures achievable using diamond anvil cells. We found that at room temperature and pressures > 220 GPa, new Raman modes arose, providing evidence for the transformation to a new opaque and electrically conductive phase IV. Above 260 GPa, in the next phase V, hydrogen reflected light well. Its resistance was nearly temperature-independent over a wide temperature range, down to 30 K, indicating that the hydrogen was metallic. Releasing the pressure induced the metallic phase to transform directly into molecular hydrogen, with significant hysteresis, at 200 GPa and 295 K. These data were published in our paper: M. I. Eremets and I. A. Troyan, "Conductive dense hydrogen," Nature Materials 10: 927-931. We will present new results on hydrogen: the phase diagram with phases IV and V determined in the P,T domain up to 300 GPa and 350 K. We will also discuss possible structures of phase IV based on our Raman and infrared measurements up to 300 GPa.
Zamani, M.; Kasesaz, Y.; Khalafi, H.; Shayesteh, M.
2015-07-01
In order to obtain a neutron spectrum with the proper component specifications for BNCT, it is necessary to design a Beam Shaping Assembly (BSA), consisting of a moderator, collimator, reflector, gamma filter and thermal neutron filter, in front of the initial radiation beam from the source. According to the results of MCNP4C simulation, the northwest beam tube has the best neutron flux among the three north beam tubes of the Tehran Research Reactor (TRR), so it was chosen for this purpose. Simulation of the BSA was carried out in the four stages mentioned above. In each stage, the ten best configurations of materials with different lengths and widths were selected as candidates for the next stage. The final BSA configuration consists of: 78 centimeters of air as an empty space, 40 centimeters of iron plus 52 centimeters of heavy water as moderator, 30 centimeters of water or 90 centimeters of aluminum oxide as a reflector, 1 millimeter of lithium (Li) as a thermal neutron filter and, finally, 3 millimeters of bismuth (Bi) as a gamma radiation filter. The calculation results show that if this BSA configuration is used for the TRR northwest beam tube, the best neutron flux and spectrum for BNCT will be achieved. (authors)
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiple views with our neighboring-view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
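The paper's matching cost function is not reproduced here; as an illustration of a standard photometric matching cost, zero-mean normalized cross-correlation (ZNCC), which is invariant to a uniform brightness offset between patches:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size
    intensity patches; 1.0 indicates a perfect photometric match."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

patch = [10, 20, 30, 40]
score = zncc(patch, [v + 5 for v in patch])  # offset-shifted copy of the patch
```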
Clinical coding. Code breakers.
Mathieson, Steve
2005-02-24
--The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.
Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions
Seshadhri, Comandur; Pinar, Ali; Sariyuce, Ahmet Erdem; Catalyurek, Umit
2014-11-01
Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization, to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for the discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
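Since the nucleus decomposition generalizes k-cores, the classic degree-peeling algorithm that computes core numbers is a useful baseline; a minimal sketch (our own simplified implementation, not the paper's algorithm):

```python
from collections import defaultdict

def core_numbers(edges):
    """k-core peeling: repeatedly remove a minimum-degree vertex.
    The core number of v is the largest k such that v belongs to a
    subgraph of minimum degree k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        v = min(remaining, key=lambda x: deg[x])  # peel min-degree vertex
        k = max(k, deg[v])
        core[v] = k
        remaining.discard(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core
```

The naive min-selection makes this quadratic; a bucket queue over degrees gives the standard linear-time version.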
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts. 3. Development of a code generator for performance prediction. 4. Automated partitioning. 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
Bombin, H.
2010-03-15
We introduce a family of two-dimensional (2D) topological subsystem quantum error-correcting codes. The gauge group is generated by two-local Pauli operators, so that two-local measurements are enough to recover the error syndrome. We study the computational power of code deformation in these codes and show that boundaries cannot be introduced in the usual way. In addition, we give a general mapping connecting suitable classical statistical mechanical models to optimal error correction in subsystem stabilizer codes that suffer from depolarizing noise.
Analysis of dense particulate flow dynamics using a Euler-Lagrange approach
NASA Astrophysics Data System (ADS)
Desjardins, Olivier; Pepiot, Perrine
2009-11-01
Thermochemical conversion of biomass to biofuels relies heavily on dense particulate flows to enhance heat and mass transfers. While CFD tools can provide very valuable insights on reactor design and optimization, accurate simulations of these flows remain extremely challenging due to the complex coupling between the gas and solid phases. In this work, Lagrangian particle tracking has been implemented in the arbitrarily high order parallel LES/DNS code NGA [Desjardins et al., JCP, 2008]. Collisions are handled using a soft-sphere model, while a combined least squares/mollification approach is adopted to accurately transfer data between the Lagrangian particles and the Eulerian gas phase mesh, regardless of the particle diameter to mesh size ratio. The energy conservation properties of the numerical scheme are assessed and a detailed statistical analysis of the dynamics of a periodic fluidized bed with a uniform velocity inlet is conducted.
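The soft-sphere collision handling mentioned above is commonly realized as a linear spring-dashpot normal force; a one-dimensional sketch with illustrative constants (not the parameters or implementation of NGA):

```python
def soft_sphere_normal_force(x1, x2, v1, v2, r1, r2, k=1.0e4, eta=5.0):
    """Spring-dashpot (DEM) normal force on particle 1 from particle 2,
    in 1-D for clarity. k is the spring stiffness, eta the damping
    coefficient; both are placeholder values. Returns zero when the
    particles do not overlap."""
    dx = x2 - x1
    dist = abs(dx)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return 0.0
    n = dx / dist                 # unit normal pointing from 1 to 2
    vrel_n = (v2 - v1) * n        # relative normal velocity (approach < 0)
    # repulsive spring plus damping that opposes relative normal motion
    return -(k * overlap) * n + eta * vrel_n * n
```

The spring term pushes overlapping particles apart, while the dashpot term dissipates energy during approach, which is what makes collisions inelastic.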
New quantum MDS-convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Li, Fengwei; Yue, Qin
2015-12-01
In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.
NASA Astrophysics Data System (ADS)
Schimeczek, C.; Engel, D.; Wunner, G.
2012-07-01
account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors of up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].

New version program summary
Program title: HFFER II
Catalogue identifier: AECC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 55 130
No. of bytes in distributed program, including test data, etc.: 293 700
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Cluster of 1-13 HP Compaq dc5750
Operating system: Linux
Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
RAM: 1 GByte per node
Classification: 2.1
External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
Catalogue identifier of previous version: AECC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
Does the new version supersede the previous version?: Yes
Nature of problem: Quantitative modelling of features observed in the X-ray spectra of isolated magnetic neutron stars is hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases.
Solution method: The
Dense LU Factorization on Multicore Supercomputer Nodes
Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V
2012-01-01
Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
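The block-cyclic owner mapping that HPL-style codes use can be written in a few lines; the "strided" variant below is only a hypothetical illustration of interpolating between grid orders, not the paper's exact scheme:

```python
def block_cyclic_owner(bi, bj, P, Q):
    """Owner process (p, q) of matrix block (bi, bj) on a P x Q process
    grid under the standard 2-D block-cyclic distribution."""
    return bi % P, bj % Q

def strided_owner(bi, bj, P, Q, stride=0):
    """Hypothetical 'striding': shift the owning process column by
    `stride` per block row, spreading a panel's blocks over more
    distinct processes; stride=0 recovers plain block-cyclic."""
    return bi % P, (bj + bi * stride) % Q
```

Under the plain mapping, all blocks of one block column land on a single process column, which is exactly the serialization of panel work the abstract describes; a nonzero stride spreads them across the grid.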
A look at scalable dense linear algebra libraries
Dongarra, J. J.; van de Geijn, R.; Walker, D. W.
1992-07-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 Gflop/s (double precision) for the largest problem considered.
Chemical Laser Computer Code Survey,
1980-12-01
DOCUMENTATION: Resonator Geometry Synthesis Code Requirement (V. L. Gamiz); Incorporate General Resonator into Ray Trace Code (W. H. Southwell); Synthesis Code Development (L. R. Stidham); Optimization Algorithms and Equations (W…). Categories: optics, kinetics, gasdynamics. Levels: simple Fabry-Perot; simple saturated gain.
Ultra-dense Hot Low Z Line Transition Opacity Simulations
NASA Astrophysics Data System (ADS)
Sauvan, P.; Mínguez, E.; Gil, J. M.; Rodríguez, R.; Rubiano, J. G.; Martel, P.; Angelo, P.; Schott, R.; Philippe, F.; Leboucher-Dalimier, E.; Mancini, R.; Calisti, A.
2002-12-01
In this work two atomic physics models (the IDEFIX code using the dicenter model and the code based on parametric potentials ANALOP) have been used to calculate the opacities for bound-bound transitions in hot ultra-dense, low Z plasmas. These simulations are in connection with experiments carried out at LULI during the last two years, focused on bound-bound radiation. In this paper H-like opacities for aluminum and fluorine plasmas have been simulated, using both theoretical models, in a wide range of densities and temperatures higher than 200 eV.
Parametric bleaching of dense plasmas
NASA Astrophysics Data System (ADS)
Gradov, O. M.; Ramazashvili, R. R.
1981-11-01
A mechanism is proposed for the nonlinear bleaching of a dense plasma slab. In this new mechanism, the electromagnetic wave incident on the plasma decays into plasma waves and then reappears as a result of the coalescence of the plasma waves at the second boundary of the slab.
Baumann, K; Weber, U; Simeonov, Y; Zink, K
2015-06-15
Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence-pattern along the beam-axis the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
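The matrix formalism described above can be sketched with thin-lens transfer matrices in one transverse plane (a simplification of the thick-quadrupole matrices a real beamline calculation would use; names and values here are illustrative):

```python
def drift(L):
    """Transfer matrix of a field-free drift of length L (m)."""
    return [[1.0, L], [0.0, 1.0]]

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (m) in this plane;
    the same magnet defocuses (focal length -f) in the other plane."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def track(state, elements):
    """Propagate the phase-space vector (x, x') through the elements,
    given in beamline order."""
    x, xp = state
    for m in elements:
        x, xp = (m[0][0] * x + m[0][1] * xp,
                 m[1][0] * x + m[1][1] * xp)
    return x, xp
```

A parallel ray entering a quadrupole of focal length 2 m crosses the axis after a 2 m drift, the focusing behaviour that the fluence patterns along the beam axis reproduce.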
Resnik, Barry I
2009-01-01
It is ethical, legal, and proper for a dermatologist to maximize income through proper coding of patient encounters and procedures. The overzealous physician can misinterpret reimbursement requirements or receive bad advice from other physicians and cross the line from aggressive coding to coding fraud. Several of the more common problem areas are discussed.
FALCON or how to compute measures time efficiently on dynamically evolving dense complex networks?
Franke, R; Ivanova, G
2014-02-01
A large number of topics in biology, medicine, neuroscience, psychology and sociology can be described via complex networks in order to investigate fundamental questions of structure, connectivity, information exchange and causality. In particular, research on biological networks such as functional spatiotemporal brain activations, and the changes caused by neuropsychiatric pathologies, is promising. When analyzing these so-called complex networks, the calculation of meaningful measures can be very time-consuming, depending on network size and structure; even worse, in many labs only standard desktop computers are available to perform these calculations. Numerous investigations concern huge but sparsely connected network structures, where most network nodes are connected to only a few others, and several libraries are already available to tackle this kind of network. A problem arises when not just a few big, sparse networks have to be analyzed, but hundreds or thousands of smaller and possibly dense networks (e.g. when measuring brain activation over time); then every minute per network is crucial. For these cases there are several ways to use standard hardware more efficiently, and it is not sufficient simply to apply algorithms designed for sparse graphs to dense graph characteristics. This article introduces the new library FALCON, developed especially for the exploration of dense complex networks. Currently, it offers 12 different measures (such as clustering coefficients), each for undirected-unweighted, undirected-weighted and directed-unweighted networks. It uses a multi-core approach in combination with comprehensive code and hardware optimizations, and an alternative massively parallel GPU implementation is provided for the most time-consuming measures. Finally, a comparative benchmark is integrated to support the choice of the most suitable library for a particular network issue.
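As an example of the measures involved, the local clustering coefficient for undirected, unweighted networks can be sketched as follows; this naive version is exactly the kind of O(k²)-per-vertex computation that dense networks make expensive (our own illustration, not FALCON's optimized code):

```python
def local_clustering(adj, v):
    """Local clustering coefficient of vertex v in an undirected,
    unweighted graph given as {vertex: set_of_neighbours}: the fraction
    of neighbour pairs that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # each neighbour-neighbour edge is seen from both endpoints
    links = sum(1 for u in nbrs for w in adj[u] if w in nbrs) // 2
    return 2.0 * links / (k * (k - 1))
```

For a dense graph the neighbour sets are large, so the inner double loop dominates; this is where bit-set representations and multi-core or GPU parallelism pay off.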
Warm Dense Matter: An Overview
Kalantar, D H; Lee, R W; Molitoris, J D
2004-04-21
This document provides a summary of the ''LLNL Workshop on Extreme States of Materials: Warm Dense Matter to NIF'', which was held on 20-22 February 2002 at the Wente Conference Center in Livermore, CA. The warm dense matter regime, the transitional phase-space region between cold material and hot plasma, is presently poorly understood. The drive to understand the nature of matter in this regime is sparking scientific activity worldwide. In addition to pure scientific interest, finite-temperature dense matter occurs in regimes of interest to the SSMP (Stockpile Stewardship Materials Program), so obtaining a better understanding of WDM is important to performing effective experiments at, e.g., NIF, a primary mission of LLNL. At this workshop we examined current experimental and theoretical work performed at, and in conjunction with, LLNL to focus future activities and define our role in this rapidly emerging research area. On the experimental front, LLNL plays a leading role in three of the five relevant areas and has the opportunity to become a major player in the other two. Discussion at the workshop indicated that the path forward for the experimental efforts at LLNL was twofold: first, we are doing reasonable baseline work at SPLs, HE, and high-energy lasers, with more effort encouraged; second, we need to plan effectively for the next evolution in large-scale facilities, both laser (NIF) and light/beam sources (LCLS/TESLA and GSI). Theoretically, LLNL has major research advantages in areas as diverse as the thermochemical approach to warm dense matter equations of state and first-principles molecular dynamics simulations. However, it was clear that there is much work to be done theoretically to understand warm dense matter. Further, close collaboration is needed between experiment and theory, with verifiable experimental data providing benchmarks of both the experimental techniques and the theoretical capabilities. The conclusion of this
Boundary Preserving Dense Local Regions.
Kim, Jaechul; Grauman, Kristen
2015-05-01
We propose a dense local region detector to extract features suitable for image matching and object recognition tasks. Whereas traditional local interest operators rely on repeatable structures that often cross object boundaries (e.g., corners, scale-space blobs), our sampling strategy is driven by segmentation, and thus preserves object boundaries and shape. At the same time, whereas existing region-based representations are sensitive to segmentation parameters and object deformations, our novel approach to robustly sample dense sites and determine their connectivity offers better repeatability. In extensive experiments, we find that the proposed region detector provides significantly better repeatability and localization accuracy for object matching compared to an array of existing feature detectors. In addition, we show our regions lead to excellent results on two benchmark tasks that require good feature matching: weakly supervised foreground discovery and nearest neighbor-based object recognition.
Radiative properties of dense nanofluids.
Wei, Wei; Fedorov, Andrei G; Luo, Zhongyang; Ni, Mingjiang
2012-09-01
The radiative properties of dense nanofluids are investigated. For nanofluids, scattering and absorption of electromagnetic waves by the nanoparticles, as well as light absorption by the matrix/fluid in which the nanoparticles are suspended, must be considered. We compare five models for predicting the apparent radiative properties of nanoparticulate media and evaluate their applicability. Using spectral absorption and scattering coefficients predicted by the different models, we compute the apparent transmittance of a nanofluid layer, including the multiple reflecting interfaces bounding the layer, and compare the model predictions with experimental results from the literature. Finally, we propose a new method to calculate the spectral radiative properties of dense nanofluids that shows quantitatively good agreement with the experimental results.
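A minimal sketch of how spectral coefficients enter an apparent-transmittance calculation, using Beer-Lambert attenuation and a single reflectance per bounding interface (a deliberate simplification; the models compared in the paper, and the full treatment of multiple internal reflections, are more involved):

```python
import math

def apparent_transmittance(kappa_abs, sigma_sca, thickness, r_interface=0.0):
    """Apparent normal transmittance of a nanofluid layer from its
    spectral absorption and scattering coefficients (1/m) and layer
    thickness (m). A single reflectance r at each of the two bounding
    interfaces is applied once; multiple internal reflections are
    ignored in this illustrative simplification."""
    beta = kappa_abs + sigma_sca              # extinction coefficient
    t_internal = math.exp(-beta * thickness)  # Beer-Lambert attenuation
    return (1.0 - r_interface) ** 2 * t_internal
```

Comparing this quantity, computed with coefficients from each candidate model, against a measured transmittance spectrum is the kind of validation the abstract describes.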
Coding Theory and Projective Spaces
NASA Astrophysics Data System (ADS)
Silberstein, Natalia
2008-05-01
The projective space of order n over a finite field F_q is the set of all subspaces of the vector space F_q^n. In this work, we consider error-correcting codes in the projective space, focusing mainly on constant dimension codes. We start with the different representations of subspaces in the projective space. These representations involve matrices in reduced row echelon form, associated binary vectors, and Ferrers diagrams. Based on these representations, we provide a new formula for the computation of the distance between any two subspaces in the projective space. We examine lifted maximum rank distance (MRD) codes, which are nearly optimal constant dimension codes. We prove that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design. The incidence matrix of the transversal design derived from a lifted MRD code can be viewed as a parity-check matrix of a linear code in the Hamming space. We derive the properties of these codes, which can also be viewed as LDPC codes. We present new bounds and constructions for constant dimension codes. First, we present a multilevel construction for constant dimension codes, which can be viewed as a generalization of the lifted MRD code construction. This construction is based on a new type of rank-metric codes, called Ferrers diagram rank-metric codes. Then we derive upper bounds on the size of constant dimension codes which contain the lifted MRD code, and provide a construction for two families of codes that attain these upper bounds. We generalize the well-known concept of a punctured code for a code in the projective space to obtain large codes which are not of constant dimension. We present efficient enumerative encoding and decoding techniques for the Grassmannian. Finally we describe a search method for constant dimension lexicodes.
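For intuition, the standard subspace distance d(U,V) = dim U + dim V − 2 dim(U∩V) can be computed from three GF(2) ranks, since dim(U∩V) = dim U + dim V − dim(U+V); a small sketch over F_2 (this is the textbook metric, not the paper's new formula):

```python
def rank_gf2(rows):
    """Rank over GF(2); each row is an int bitmask of coordinates."""
    basis = {}  # pivot bit position -> reduced row
    rank = 0
    for r in rows:
        while r:
            p = r.bit_length() - 1
            if p not in basis:
                basis[p] = r
                rank += 1
                break
            r ^= basis[p]  # eliminate the current pivot bit
    return rank

def subspace_distance(U, V):
    """d(U, V) = 2 dim(U + V) - dim U - dim V, an equivalent form of
    dim U + dim V - 2 dim(U ∩ V); U and V are lists of spanning rows."""
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)
```

Concatenating the two row lists spans U + V, so three rank computations suffice and the intersection never has to be constructed explicitly.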
An efficient fully atomistic potential model for dense fluid methane
NASA Astrophysics Data System (ADS)
Jiang, Chuntao; Ouyang, Jie; Zhuang, Xin; Wang, Lihua; Li, Wuming
2016-08-01
A fully atomistic model intended as a general-purpose model for dense fluid methane is presented. The new optimized potential for liquid simulation (OPLS) model is a rigid five-site model consisting of five fixed point charges and five Lennard-Jones centers. The parameters in the potential model are determined by a fit to experimental data for dense fluid methane using molecular dynamics simulation. The radial distribution function and the diffusion coefficient are successfully calculated for dense fluid methane at various state points. The simulated results are in good agreement with the available experimental data in the literature. Moreover, the distribution of the mean number of hydrogen bonds and the distribution of pair energy are analyzed, as obtained from the new model and five reference potential models. Furthermore, the space-time correlation functions for dense fluid methane are also discussed. All the numerical results demonstrate that the new OPLS model is well suited to investigating dense fluid methane.
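In an OPLS-style model, each pair of sites on different molecules interacts through a 12-6 Lennard-Jones term plus a point-charge Coulomb term; a sketch of the functional form (the fitted parameter values of the paper's model are not reproduced here):

```python
def lj(r, sigma, eps):
    """12-6 Lennard-Jones pair energy: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def coulomb(r, qi, qj, ke=138.935458):
    """Point-charge Coulomb pair energy; ke is Coulomb's constant in
    kJ mol^-1 nm e^-2 (GROMACS-style units), with r in nm and q in e."""
    return ke * qi * qj / r

def site_site_energy(r, sigma, eps, qi, qj):
    """Total OPLS-style site-site interaction: LJ plus Coulomb.
    All parameters here are placeholders, not the model's fitted values."""
    return lj(r, sigma, eps) + coulomb(r, qi, qj)
```

Summing this over all intermolecular site pairs of the rigid five-site geometry gives the pair energy whose distribution the abstract analyzes.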
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
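As a concrete illustration of the inner-code decoding those simulations exercise, here is hard-decision Viterbi decoding of the textbook rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal (a standard example, not necessarily one of the codes found in the study):

```python
G = (0b111, 0b101)  # generator taps (7, 5) octal, constraint length 3

def encode(bits):
    """Zero-flushed convolutional encoding; two output bits per input bit."""
    state = 0  # the two previous input bits
    out = []
    for b in list(bits) + [0, 0]:  # flush the register with K-1 zeros
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, nbits):
    """Maximum-likelihood decoding over the 4-state trellis, Hamming metric."""
    INF = float("inf")
    metric = [0, INF, INF, INF]        # encoder starts in state 0
    paths = [[], [], [], []]
    for t in range(nbits + 2):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * 4
        new_paths = [[]] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                cost = metric[s] + sum(x != y for x, y in zip(out, r))
                ns = reg >> 1
                if cost < new_metric[ns]:   # keep the survivor per state
                    new_metric[ns] = cost
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:nbits]            # flushing terminates in state 0
```

Because this code has free distance 5, any single channel bit error (and in fact any two) is corrected by the survivor search.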
Probing Cold Dense Nuclear Matter
Subedi, Ramesh; Shneor, R.; Monaghan, Peter; Anderson, Bryon; Aniol, Konrad; Annand, John; Arrington, John; Benaoum, Hachemi; Benmokhtar, Fatiha; Bertozzi, William; Boeglin, Werner; Chen, Jian-Ping; Choi, Seonho; Cisbani, Evaristo; Craver, Brandon; Frullani, Salvatore; Garibaldi, Franco; Gilad, Shalev; Gilman, Ronald; Glamazdin, Oleksandr; Hansen, Jens-Ole; Higinbotham, Douglas; Holmstrom, Timothy; Ibrahim, Hassan; Igarashi, Ryuichi; De Jager, Cornelis; Jans, Eddy; Jiang, Xiaodong; Kaufman, Lisa; Kelleher, Aidan; Kolarkar, Ameya; Kumbartzki, Gerfried; LeRose, John; Lindgren, Richard; Liyanage, Nilanga; Margaziotis, Demetrius; Markowitz, Pete; Marrone, Stefano; Mazouz, Malek; Meekins, David; Michaels, Robert; Moffit, Bryan; Perdrisat, Charles; Piasetzky, Eliazer; Potokar, Milan; Punjabi, Vina; Qiang, Yi; Reinhold, Joerg; Ron, Guy; Rosner, Guenther; Saha, Arunava; Sawatzky, Bradley; Shahinyan, Albert; Sirca, Simon; Slifer, Karl; Solvignon, Patricia; Sulkosky, Vincent; Urciuoli, Guido; Voutier, Eric; Watson, John; Weinstein, Lawrence; Wojtsekhowski, Bogdan; Wood, Stephen; Zheng, Xiaochao; Zhu, Lingyan
2008-06-01
The protons and neutrons in a nucleus can form strongly correlated nucleon pairs. Scattering experiments, in which a proton is knocked out of the nucleus with high-momentum transfer and high missing momentum, show that in carbon-12 the neutron-proton pairs are nearly 20 times as prevalent as proton-proton pairs and, by inference, neutron-neutron pairs. This difference between the types of pairs is due to the nature of the strong force and has implications for understanding cold dense nuclear systems such as neutron stars.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
Embedded foveation image coding.
Wang, Z; Bovik, A C
2001-01-01
The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.
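The eccentricity-dependent resolution fall-off that drives foveated coding is often modeled with an exponential contrast-threshold formula; a sketch using commonly cited parameter values, which are treated here as assumptions rather than the paper's exact settings:

```python
import math

def cutoff_frequency(e, alpha=0.106, e2=2.3, ct0=1.0 / 64.0):
    """Highest perceptible spatial frequency (cycles/degree) at retinal
    eccentricity e (degrees), from the contrast-threshold model
    CT(f, e) = CT0 * exp(alpha * f * (e + e2) / e2) frequently used in
    foveated image coding. alpha, e2 and CT0 are commonly cited default
    values, assumed here for illustration."""
    return e2 * math.log(1.0 / ct0) / (alpha * (e + e2))
```

Frequencies above this cutoff at a given eccentricity are imperceptible, which is why the corresponding wavelet coefficients can be deprioritized or dropped when ordering the embedded bitstream.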
Joo, Balint
2014-09-16
A simple code generator that produces the low-level code kernels used by the QPhiX library for Lattice QCD. It generates kernels for the Wilson-Dslash and Wilson-Clover operators, and can be reused to write other optimized kernels for Intel Xeon Phi(tm), Intel Xeon(tm), and potentially other architectures.
Inference by replication in densely connected systems
Neirotti, Juan P.; Saad, David
2007-10-15
An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.
Dense crystalline packings of ellipsoids
NASA Astrophysics Data System (ADS)
Jin, Weiwei; Jiao, Yang; Liu, Lufeng; Yuan, Ye; Li, Shuixiang
2017-03-01
An ellipsoid, the simplest nonspherical shape, has been extensively used as a model for elongated building blocks for a wide spectrum of molecular, colloidal, and granular systems. Yet the densest packing of congruent hard ellipsoids, which is intimately related to the high-density phase of many condensed matter systems, is still an open problem. We discover an unusual family of dense crystalline packings of self-dual ellipsoids (semiaxis ratios α : √α : 1), containing 24 particles with a quasi-square-triangular (SQ-TR) tiling arrangement in the fundamental cell. The associated packing density φ exceeds that of the densest known SM2 crystal [A. Donev et al., Phys. Rev. Lett. 92, 255506 (2004), 10.1103/PhysRevLett.92.255506] for aspect ratios α in (1.365, 1.5625), attaining a maximal φ ≈ 0.75806 at α = 93/64. We show that the SQ-TR phase derived from these dense packings is thermodynamically stable at high densities over the aforementioned α range and report a phase diagram for self-dual ellipsoids. The discovery of the SQ-TR crystal suggests organizing principles for nonspherical particles and self-assembly of colloidal systems.
Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences
2008-07-01
A random coding bound on the rate of DNA codes is proved. To obtain the bound, we use ensembles of DNA sequences which are generalizations of the Fibonacci sequences. Subject terms: DNA codes, Fibonacci ensembles, DNA computing, code optimization. (Reporting period: 6 Jul 08 – 11 Jul 08.)
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool, but it is not tailored specifically to researchers. In comparison, OSF offers a one-stop solution for researchers, but much of its functionality is still under development. I conclude by listing alternative, lesser-known tools for code and materials sharing. PMID:25165519
Dense deformation field estimation for brain intraoperative images registration
NASA Astrophysics Data System (ADS)
De Craene, Mathieu S.; du Bois d'Aische, Aloys; Talos, Ion-Florin; Ferrant, Matthieu; Black, Peter M.; Jolesz, Ferenc; Kikinis, Ron; Macq, Benoit; Warfield, Simon K.
2004-05-01
A new fast non-rigid registration algorithm is presented. The algorithm estimates a dense deformation field by optimizing a criterion that measures image similarity by mutual information and regularizes with a linear elastic energy term. The optimal deformation field is found using a Simultaneous Perturbation Stochastic Approximation (SPSA) to the gradient. The implementation is parallelized for symmetric multi-processor architectures. This algorithm was applied to capture non-rigid brain deformations that occur during neurosurgery. Segmentation of the intra-operative data is not required, but preoperative segmentation of the brain allows the algorithm to be robust to artifacts due to the craniotomy.
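The gradient estimator named in the abstract, SPSA, can be shown on a toy problem. This is a hedged sketch: the registration paper optimizes a mutual-information criterion, whereas the objective and gain constants below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of Simultaneous Perturbation Stochastic Approximation (SPSA)
# on a toy quadratic criterion standing in for similarity + elastic energy.
rng = np.random.default_rng(1)

def criterion(x):
    return np.sum((x - 3.0) ** 2)   # illustrative objective, minimum at x = 3

x = np.zeros(5)
for k in range(1, 2001):
    a = 0.1 / k ** 0.602            # step-size decay (standard SPSA exponents)
    c = 0.1 / k ** 0.101            # perturbation-size decay
    delta = rng.choice([-1.0, 1.0], size=x.size)   # Bernoulli +-1 perturbation
    # Two function evaluations estimate ALL partial derivatives at once,
    # which is the attraction when each evaluation is an expensive image metric.
    ghat = (criterion(x + c * delta) - criterion(x - c * delta)) / (2.0 * c * delta)
    x -= a * ghat

print(np.allclose(x, 3.0, atol=0.05))  # True: converges near the minimum
```

The two-evaluation cost per iteration, independent of the number of parameters, is why SPSA suits dense deformation fields with many degrees of freedom.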
Velocity coherence in dense cores
NASA Astrophysics Data System (ADS)
Goodman, Alyssa A.; Barranco, Joseph A.; Wilner, David J.; Heyer, Mark H.
1997-02-01
At the meeting, we presented a summary of two papers which support the hypothesis that the molecular clouds which contain star-forming low-mass dense cores are self-similar in nature on size scales larger than an inner scale, Rcoh, and that within Rcoh, the cores are ``coherent,'' in that their filling factor is large and they are characterized by a very small, roughly constant, mildly supersonic velocity dispersion. We expect these two papers, by Barranco & Goodman [1] and Goodman, Barranco, Wilner, & Heyer, to appear in the Astrophysical Journal within the coming year. Here, we present a short summary of our results. The interested reader is urged to consult the on-line version of this work at cfa-www.harvard.edu/~agoodman/vel_coh.html [2].
Viscoelastic behavior of dense microemulsions
NASA Astrophysics Data System (ADS)
Cametti, C.; Codastefano, P.; D'arrigo, G.; Tartaglia, P.; Rouch, J.; Chen, S. H.
1990-09-01
We have performed extensive measurements of shear viscosity, ultrasonic absorption, and sound velocity in a ternary system consisting of water, decane, and sodium di(2-ethylhexyl)sulfosuccinate (AOT), in the one-phase region where it forms a water-in-oil microemulsion. We observe a rapid increase of the static shear viscosity in the dense microemulsion region. Correspondingly, the sound absorption shows unambiguous evidence of viscoelastic behavior. The absorption data for various volume fractions and temperatures can be reduced to a universal curve by scaling both the absorption and the frequency by the measured static shear viscosity. The sound absorption can be interpreted as coming from the high-frequency tail of the viscoelastic relaxation, describable by a Cole-Cole relaxation formula with unusually small elastic moduli.
Uniformly dense polymeric foam body
Whinnery, Jr., Leroy
2003-07-15
A method for providing a uniformly dense polymer foam body having a density between about 0.013 g/cm³ and about 0.5 g/cm³ is disclosed. The method utilizes a thermally expandable polymer microsphere material wherein some of the microspheres are unexpanded and some are only partially expanded. It is shown that by mixing the two types of materials in appropriate ratios to achieve the desired bulk final density, filling a mold with this mixture so as to displace all or essentially all of the internal volume of the mold, heating the mold for a predetermined interval at a temperature above about 130 °C, and then cooling the mold to a temperature below 80 °C, the molded part achieves a bulk density which varies by less than about ±6% everywhere throughout the part volume.
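The mixing step lends itself to a back-of-envelope calculation. Assuming ideal volume additivity (an assumption, not a formula from the patent), and using the two endpoint densities of the disclosed range as hypothetical stand-ins for the unexpanded and partially expanded materials, the required mass fraction follows a lever rule:

```python
# Hedged lever-rule estimate (illustrative assumption, not from the patent):
# mass fraction f of unexpanded microspheres (density rho_u) to blend with
# partially expanded ones (rho_p) so the mixture hits a target bulk density.
def unexpanded_fraction(rho_target, rho_u=0.5, rho_p=0.013):
    # Ideal mixing by volume additivity: 1/rho_mix = f/rho_u + (1 - f)/rho_p
    return (1.0 / rho_target - 1.0 / rho_p) / (1.0 / rho_u - 1.0 / rho_p)

f = unexpanded_fraction(0.1)   # target bulk density 0.1 g/cm^3
print(round(f, 3))             # 0.893
```

The strong lever arm of the low-density component explains why small changes in the blend ratio shift the final bulk density substantially.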
Neutrino Oscillations in Dense Matter
NASA Astrophysics Data System (ADS)
Lobanov, A. E.
2017-03-01
A modification of the electroweak theory, where the fermions with the same electroweak quantum numbers are combined in multiplets and are treated as different quantum states of a single particle, is proposed. In this model, mixing and oscillations of particles arise as a direct consequence of the general principles of quantum field theory. The developed approach enables one to calculate the probabilities of the processes taking place in the detector at long distances from the particle source. Calculations of higher-order processes, including computation of the contributions due to radiative corrections, can be performed in the framework of the perturbation theory using the regular diagram technique. As a result, the analog to the Dirac-Schwinger equation of quantum electrodynamics describing neutrino oscillations and its spin rotation in dense matter can be obtained.
Extended thermodynamics of dense gases
NASA Astrophysics Data System (ADS)
Arima, T.; Taniguchi, S.; Ruggeri, T.; Sugiyama, M.
2012-11-01
We study extended thermodynamics of dense gases by adopting the system of field equations with a different hierarchy structure to that adopted in the previous works. It is the theory of 14 fields of mass density, velocity, temperature, viscous stress, dynamic pressure, and heat flux. As a result, most of the constitutive equations can be determined explicitly by the caloric and thermal equations of state. It is shown that the rarefied-gas limit of the theory is consistent with the kinetic theory of gases. We also analyze three physically important systems, that is, a gas with the virial equations of state, a hard-sphere system, and a van der Waals fluid, by using the general theory developed in the former part of the present work.
Kondo, K.; Kanesue, T.; Horioka, K.; Okamura, M.
2010-05-23
Warm Dense Matter (WDM) poses a challenging problem because WDM, which lies beyond the ideal-plasma regime, is a low-temperature, high-density state with partially degenerate electrons and coupled ions. WDM is a common state of matter in astrophysical objects such as the cores of giant planets and white dwarfs. WDM studies require a large energy deposition into a small target volume in a time shorter than the hydrodynamical time, with uniformity across the full thickness of the target. Since moderate-energy ion beams (~0.3 MeV/u) can be a useful tool for WDM physics, we propose WDM generation using the Direct Plasma Injection Scheme (DPIS). In DPIS, a laser ion source is connected directly to the Radio Frequency Quadrupole (RFQ) linear accelerator, without a beam transport line. DPIS with a realistic final focus and a linear accelerator can produce WDM.
Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.
1996-12-01
This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.
Multiple Satellite Trajectory Optimization
2004-12-01
SOLVING OPTIMAL CONTROL PROBLEMS: The driving principle used to solve optimal control problems was first formalized by the Soviet … methods and processes of solving optimal control problems, this section will demonstrate how the formulations work as expected. Once coded, the …
The performance of dense medium processes
Horsfall, D.W.
1993-12-31
Dense medium washing in baths and cyclones is widely carried out in South Africa. The paper shows the reason for the preferred use of dense medium processes rather than gravity concentrators such as jigs. The factors leading to efficient separation in baths are listed and an indication given of the extent to which these factors may be controlled and embodied in the deployment of baths and dense medium cyclones in the planning stages of a plant.
Dense module enumeration in biological networks
NASA Astrophysics Data System (ADS)
Tsuda, Koji; Georgii, Elisabeth
2009-12-01
Analysis of large networks is a central topic in various research fields, including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step in analyzing these networks. Although numerous methods have been proposed for this purpose, they often lack mathematical rigor: there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources, such as gene expression profiles or customer profiles, can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.
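The enumeration objective can be stated concretely. This brute-force sketch is illustrative only (the paper's reverse-search method achieves completeness without exhaustive enumeration): it lists every vertex subset whose induced-subgraph density meets a threshold.

```python
import itertools

# Toy version of "enumerate all dense modules": density of a vertex subset is
# the number of induced edges divided by the number of possible edges.
def density(nodes, edges):
    nodes = set(nodes)
    if len(nodes) < 2:
        return 0.0
    inside = sum(1 for u, v in edges if u in nodes and v in nodes)
    return inside / (len(nodes) * (len(nodes) - 1) / 2)

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a triangle plus a pendant edge
dense = [s for r in range(2, 5)
         for s in itertools.combinations(range(4), r)
         if density(s, edges) >= 0.9]
print(dense)  # every connected pair plus the triangle {0, 1, 2}
```

Exhaustive enumeration is exponential in the vertex count, which is exactly the cost the reverse-search strategy is designed to avoid while keeping the all-modules guarantee.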
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence, the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
Dense ceramic membranes for methane conversion
Balachandran, U.; Mieville, R.L.; Ma, B.; Udovich, C.A.
1996-05-01
This report focuses on a mechanism for oxygen transport through mixed-oxide conductors as used in dense ceramic membrane reactors for the partial oxidation of methane to syngas (CO and H₂). The in-situ separation of O₂ from air by the membrane reactor saves the costly cryogenic separation step that is required in conventional syngas production. The mixed oxide of choice is SrFeCo₀.₅Oₓ, which exhibits high oxygen permeability and has been shown in previous studies to possess high stability in both oxidizing and reducing conditions; in addition, it can be readily formed into reactor configurations such as tubes. An understanding of the electrical properties and the defect dynamics in this material is essential and will help us to find the optimal operating conditions for the conversion reactor. In this paper, we discuss the conductivities of the SrFeCo₀.₅Oₓ system, which depend on temperature and the partial pressure of oxygen. Based on the experimental results, a defect model is proposed to explain the electrical properties of this system. The oxygen permeability of SrFeCo₀.₅Oₓ is estimated by using conductivity data and is compared with that obtained from the methane conversion reaction.
Understanding shape entropy through local dense packing
van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; Engel, Michael; Glotzer, Sharon C.
2014-01-01
Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. Here, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (kBT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. We show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa. PMID:25344532
Understanding shape entropy through local dense packing
van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; Engel, Michael; Glotzer, Sharon C.
2014-10-24
Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. In this paper, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (k_{B}T) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. Finally, we show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa.
Dense packings of the Platonic and Archimedean solids.
Torquato, S; Jiao, Y
2009-08-13
Dense particle packings have served as useful models of the structures of liquid, glassy and crystalline states of matter, granular media, heterogeneous materials and biological systems. Probing the symmetries and other mathematical properties of the densest packings is a problem of interest in discrete geometry and number theory. Previous work has focused mainly on spherical particles-very little is known about dense polyhedral packings. Here we formulate the generation of dense packings of polyhedra as an optimization problem, using an adaptive fundamental cell subject to periodic boundary conditions (we term this the 'adaptive shrinking cell' scheme). Using a variety of multi-particle initial configurations, we find the densest known packings of the four non-tiling Platonic solids (the tetrahedron, octahedron, dodecahedron and icosahedron) in three-dimensional Euclidean space. The densities are 0.782..., 0.947..., 0.904... and 0.836..., respectively. Unlike the densest tetrahedral packing, which must not be a Bravais lattice packing, the densest packings of the other non-tiling Platonic solids that we obtain are their previously known optimal (Bravais) lattice packings. Combining our simulation results with derived rigorous upper bounds and theoretical arguments leads us to the conjecture that the densest packings of the Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This is the analogue of Kepler's sphere conjecture for these solids.
Approximate hard-sphere method for densely packed granular flows.
Guttenberg, Nicholas
2011-05-01
The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
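The event-driven side of the comparison rests on the instantaneous hard-sphere collision rule, which the approximate method applies several events at a time. This sketch shows only the single-collision rule, for equal masses (an assumption for simplicity):

```python
import numpy as np

# Instantaneous elastic collision between two equal-mass grains: exchange the
# component of the relative velocity along the line of centers, keep the
# tangential component unchanged.
def collide(x1, x2, v1, v2):
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit normal at contact
    dv = np.dot(v1 - v2, n)                    # approach speed along n
    return v1 - dv * n, v2 + dv * n            # post-collision velocities

v1, v2 = collide(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                 np.array([1.0, 0.0]), np.array([0.0, 0.0]))
print(v1, v2)  # head-on: the moving grain stops, the other takes its velocity
```

Because the update is algebraic rather than integrated through a stiff spring, each collision costs far less than the many small MD timesteps needed to resolve the same contact.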
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
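The error-detection piece can be made concrete. CCSDS telemetry frames carry a 16-bit CRC commonly parameterized as CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF); treat that exact parameter choice here as an assumption for illustration rather than a restatement of the recommendation.

```python
# Bitwise sketch of a 16-bit CRC with CCITT parameters (assumed here:
# polynomial 0x1021, initial value 0xFFFF, no reflection, no final XOR).
def crc16_ccitt(data: bytes, poly=0x1021, crc=0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8                  # feed the next byte into the register
        for _ in range(8):                # shift out one bit at a time
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))  # 0x29b1, the standard check value
```

A receiver recomputes the CRC over the decoded frame; any mismatch flags residual errors that slipped past the concatenated RS/convolutional correction stage.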
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian
1988-01-01
The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general-purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth, and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple-reduction optimization capability in the future.
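The formulation described above (pick one response as the objective, bound the others) can be illustrated with a toy constrained search. The weight and life surrogates below are invented stand-ins, not the gear analyses of the actual codes, and a grid search stands in for the COPES/ADS optimizer.

```python
import numpy as np

# Toy constrained design search (illustrative surrogates, not gear theory):
# minimize a "weight" response over face width b and diametral pitch P,
# subject to a lower bound on a "life" response.
def weight(b, P):
    return b / P                   # assumed: grows with face width

def life(b, P):
    return 50.0 * b / P ** 0.5     # assumed: wider faces last longer

b_grid = np.linspace(0.5, 3.0, 251)
P_grid = np.linspace(4.0, 16.0, 241)
B, P = np.meshgrid(b_grid, P_grid)
feasible = life(B, P) >= 20.0              # constraint: minimum required life
W = np.where(feasible, weight(B, P), np.inf)
i = np.unravel_index(np.argmin(W), W.shape)
print(B[i], P[i])                          # cheapest feasible design on the grid
```

The pattern matches the abstract's usage: swapping which response is the objective and which are bounds reuses the same analysis routines unchanged.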
Robust coding over noisy overcomplete channels.
Doi, Eizaburo; Balcan, Doru C; Lewicki, Michael S
2007-02-01
We address the problem of robust coding in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions in order to achieve robustness. We also present numerical solutions of robust coding for high-dimensional image data, demonstrating that these codes are substantially more robust than other linear image coding methods such as PCA, ICA, and wavelets.
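The setting the abstract analyzes can be sketched numerically: a linear encoder W maps a signal x to noisy channel outputs y = Wx + n, and for a Gaussian source the MSE-optimal linear decoder is the Wiener filter. The particular W, noise level, and dimensions below are illustrative assumptions, not the paper's learned codes.

```python
import numpy as np

# Hedged sketch of robust linear coding: overcomplete encoder (3 coding
# units for a 2-D signal) with intrinsic channel noise, decoded by the
# MSE-optimal (Wiener) linear decoder for a unit-variance source.
rng = np.random.default_rng(0)
sigma2 = 0.1                                          # channel noise variance
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])    # assumed encoder

x = rng.normal(size=(2, 1000))                        # unit-variance source
y = W @ x + np.sqrt(sigma2) * rng.normal(size=(3, 1000))

# Wiener decoder: D = W^T (W W^T + sigma2 I)^(-1)
D = W.T @ np.linalg.inv(W @ W.T + sigma2 * np.eye(3))
mse = np.mean((D @ y - x) ** 2)
print(mse < 0.1)  # True: far below the unit prior variance
```

The redundancy of the third coding unit is what buys robustness: averaging over correlated noisy measurements drives the reconstruction error well below what two units alone would achieve at the same noise level.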
Combined trellis coding with asymmetric modulations
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1985-01-01
The use of asymmetric signal constellations combined with optimized trellis coding to improve the performance of coded systems without increasing the average or peak power, or changing the bandwidth constraints of a system is discussed. The trellis code, asymmetric signal set, and Viterbi decoder of the system model are examined. The procedures for assigning signals to state transitions of the trellis code are described; the performance of the trellis coding system is evaluated. Examples of AM, QAM, and MPSK modulations with short memory trellis codes are presented.
Efficient calculation of atomic rate coefficients in dense plasmas
NASA Astrophysics Data System (ADS)
Aslanyan, Valentin; Tallents, Greg J.
2017-03-01
Modelling electron statistics in a cold, dense plasma by the Fermi-Dirac distribution leads to complications in the calculations of atomic rate coefficients. The Pauli exclusion principle slows down the rate of collisions as electrons must find unoccupied quantum states and adds a further computational cost. Methods to calculate these coefficients by direct numerical integration with a high degree of parallelism are presented. This degree of optimization allows the effects of degeneracy to be incorporated into a time-dependent collisional-radiative model. Example results from such a model are presented.
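The degenerate-electron statistics behind the abstract can be shown directly. This is a hedged sketch with illustrative parameters, not the authors' collisional-radiative code: it evaluates the Fermi-Dirac occupancy f(E) that supplies the Pauli-blocking factor 1 − f(E′) slowing collisional rates in a cold, dense plasma.

```python
import numpy as np

# Fermi-Dirac occupancy (energies, mu, and T in the same, arbitrary units).
def fermi_dirac(E, mu, T):
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

T, mu = 1.0, 10.0                      # assumed: strongly degenerate, T << mu
E = np.linspace(0.0, 50.0, 20001)      # energy grid for numerical integration
occ = fermi_dirac(E, mu, T)

# Degenerate limit: states well below mu are full, well above mu are empty,
# so collisions scattering electrons into low-lying states are suppressed.
print(occ[0] > 0.999, occ[-1] < 1e-10)  # True True
```

Rate coefficients then become integrals of cross sections against f(E)[1 − f(E′)] rather than a Maxwellian, which is the extra numerical cost the paper's parallel integration scheme targets.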
High accuracy and visibility-consistent dense multiview stereo.
Vu, Hoang-Hiep; Labatut, Patrick; Pons, Jean-Philippe; Keriven, Renaud
2012-05-01
Since the initial comparison of Seitz et al., the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al., showing the results to compare more than favorably with the current state-of-the-art methods.
Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.
1988-04-01
A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.
HERCULES: A Pattern Driven Code Transformation System
Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L
2012-01-01
New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.
Dynamical theory of dense groups of galaxies
NASA Technical Reports Server (NTRS)
Mamon, Gary A.
1990-01-01
It is well known that galaxies associate in groups and clusters. Perhaps 40% of all galaxies are found in groups of 4 to 20 galaxies (e.g., Tully 1987). Although most groups appear to be so loose that the galaxy interactions within them ought to be insignificant, the apparently densest groups, known as compact groups, appear so dense when seen in projection onto the plane of the sky that their members often overlap. These groups thus appear as dense as the cores of rich clusters. The most popular catalog of compact groups, compiled by Hickson (1982), includes isolation among its selection criteria. Therefore, in comparison with the cores of rich clusters, Hickson's compact groups (HCGs) appear to be the densest isolated regions in the Universe (in galaxies per unit volume), and thus provide in principle a clean laboratory for studying the competition of very strong gravitational interactions. The $64,000 question here is then: Are compact groups really bound systems as dense as they appear? If dense groups indeed exist, then one expects that each of the dynamical processes leading to the interaction of their member galaxies should be greatly enhanced. This leads us to the questions: How stable are dense groups? How do they form? And the related question, fascinating to any theorist: What dynamical processes predominate in dense groups of galaxies? If HCGs are not bound dense systems, but instead 1D chance alignments (Mamon 1986, 1987; Walke & Mamon 1989) or 3D transient cores (Rose 1979) within larger looser systems of galaxies, then the relevant question is: How frequent are chance configurations within loose groups? Here, the author answers these last four questions after comparing in some detail the methods used and the results obtained in the different studies of dense groups.
Magnetic Phases in Dense Quark Matter
Incera, Vivian de la
2007-10-26
In this paper I discuss the magnetic phases of the three-flavor color superconductor. These phases can take place at different field strengths in a highly dense quark system. Given that the best natural candidates for the realization of color superconductivity are the extremely dense cores of neutron stars, which typically have very large magnetic fields, the magnetic phases here discussed could have implications for the physics of these compact objects.
Dissociation energy of molecules in dense gases
NASA Technical Reports Server (NTRS)
Kunc, J. A.
1992-01-01
A general approach is presented for calculating the reduction of the dissociation energy of diatomic molecules immersed in a dense (n = less than 10 exp 22/cu cm) gas of molecules and atoms. The dissociation energy of a molecule in a dense gas differs from that of the molecule in vacuum because the intermolecular forces change the intramolecular dynamics of the molecule, and, consequently, the energy of the molecular bond.
METHOD OF PRODUCING DENSE CONSOLIDATED METALLIC REGULUS
Magel, T.T.
1959-08-11
A method is presented for reducing dense metal compositions while simultaneously separating impurities from the reduced dense metal and casting the reduced purified dense metal, such as uranium, into well consolidated metal ingots. The reduction is accomplished by heating the dense metallic salt in the presence of a reducing agent, such as an alkali metal or alkaline earth metal in a bomb type reacting chamber, while applying centrifugal force on the reacting materials. Separation of the metal from the impurities is accomplished essentially by the incorporation of a constricted passageway at the vertex of a conical reacting chamber which is in direct communication with a collecting chamber. When a centrifugal force is applied to the molten metal and slag from the reduction in a direction collinear with the axis of the constricted passage, the dense molten metal is forced therethrough while the less dense slag is retained within the reaction chamber, resulting in a simultaneous separation of the reduced molten metal from the slag and a compacting of the reduced metal in a homogeneous mass.
Computational experience with a dense column feature for interior-point methods
Wenzel, M.; Czyzyk, J.; Wright, S.
1997-08-01
Most software that implements interior-point methods for linear programming formulates the linear algebra at each iteration as a system of normal equations. This approach can be extremely inefficient when the constraint matrix has dense columns, because the density of the normal equations matrix is much greater than that of the constraint matrix and the system is expensive to solve. In this report the authors describe a more efficient approach for this case, which involves handling the dense columns by using a Schur-complement method and conjugate gradient iteration. The authors report numerical results with the code PCx, into which the technique has now been incorporated.
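The dense-column idea can be illustrated with a small rank-update solve. This sketch does not reproduce the report's Schur-complement-plus-conjugate-gradient implementation; it uses the closely related Sherman-Morrison-Woodbury identity on dense arrays, and the function name and setup are invented for illustration.

```python
import numpy as np

def solve_with_dense_columns(A_s, A_d, d_s, d_d, b):
    """Solve the normal equations (A D A^T) x = b with A = [A_s | A_d].

    Only the 'sparse' part S = A_s diag(d_s) A_s^T is factored (here a
    dense solve stands in for a sparse Cholesky); the few dense columns
    A_d enter as a low-rank update V V^T folded in via the
    Sherman-Morrison-Woodbury formula, so the dense part never
    contaminates the factored matrix.
    """
    S = (A_s * d_s) @ A_s.T            # sparse part of the normal equations
    V = A_d * np.sqrt(d_d)             # rank-k update: M = S + V V^T
    Sinv_b = np.linalg.solve(S, b)
    Sinv_V = np.linalg.solve(S, V)
    k = V.shape[1]
    capacitance = np.eye(k) + V.T @ Sinv_V   # small k x k system
    return Sinv_b - Sinv_V @ np.linalg.solve(capacitance, V.T @ Sinv_b)
```

The point of the construction is that the expensive factorization touches only S, while the dense columns cost one extra k-by-k solve.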
NASA Astrophysics Data System (ADS)
Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.
2015-11-01
Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. Firstly, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Secondly, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.
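The feedforward + feedback architecture can be caricatured on a scalar linear plant. This toy sketch is only meant to show why the feedback term adds robustness to model mismatch; the plant, gains, and function names are invented, and the actual design uses nonlinear optimization and TRANSP predictive simulations.

```python
import numpy as np

def feedforward(a, b, x_ref):
    """Exact model inversion: choose u_t so the nominal plant
    x_{t+1} = a*x_t + b*u_t tracks x_ref, mirroring the role of the
    optimization-based feedforward trajectory design."""
    return [(x_ref[t + 1] - a * x_ref[t]) / b for t in range(len(x_ref) - 1)]

def track(a, b, x_ref, a_true, K=0.5, x0=0.0):
    """Apply u = u_ff + K*(ref - x) to the *true* plant (parameter
    a_true), which may differ from the design model (parameter a);
    K = 0 recovers pure feedforward."""
    u_ff = feedforward(a, b, x_ref)
    x = x0
    traj = [x]
    for t, uff in enumerate(u_ff):
        u = uff + K * (x_ref[t] - x)   # feedforward + proportional feedback
        x = a_true * x + b * u
        traj.append(x)
    return np.array(traj)
```

With a perfect model the feedforward alone tracks exactly; under model mismatch the feedback term shrinks the tracking error, which is the robustness argument made above.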
Evolution of Dense Gas with Starburst Age: When Star Formation Versus Dense Gas Relations Break Down
NASA Astrophysics Data System (ADS)
Meier, David S.; Turner, J. L.; Schinnerer, E.
2011-05-01
Dense gas correlates well with star formation on kpc scales. On smaller scales, motions of individual clouds become comparable to the 100 Myr ages of starbursts. One then expects the star formation rate vs. dense gas relations to break down on giant molecular cloud scales. We exploit this to study the evolutionary history of the nuclear starburst in the nearby spiral, IC 342. Maps of the J=5-4 and 16-15 transitions of the dense gas tracer HC3N at 20 pc resolution made with the VLA and the Plateau de Bure interferometer are presented. The 5-4 line of HC3N traces very dense gas in the cold phase, while the 16-15 transition traces warm, dense gas. These reveal changes in dense cloud structure on scales of 30 pc among clouds with star formation histories differing by only a few Myrs. HC3N emission does not correlate well with young star formation at these high spatial resolutions, but gas excitation does. The cold, dense gas extends well beyond the starburst region implying large amounts of dense quiescent gas not yet actively forming stars. Close to the starburst the high excitation combined with faint emission indicates that the immediate (30 pc) vicinity of the starburst lacks large masses of very dense gas and has high dense gas star formation efficiencies. The dense gas appears to be in pressure equilibrium with the starburst. We propose a scenario where the starburst is being caught in the act of dispersing or destroying the dense gas in the presence of the expanding HII region. This work is supported by the NSF through NRAO and grant AST-1009620.
MHD modeling of dense plasma focus electrode shape variation
NASA Astrophysics Data System (ADS)
McLean, Harry; Hartman, Charles; Schmidt, Andrea; Tang, Vincent; Link, Anthony; Ellsworth, Jen; Reisman, David
2013-10-01
The dense plasma focus (DPF) is a very simple device physically, but results to date indicate that very extensive physics is needed to understand the details of operation, especially during the final pinch where kinetic effects become very important. Nevertheless, the overall effects of electrode geometry, electrode size, and drive circuit parameters can be informed efficiently using MHD fluid codes, especially in the run-down phase before the final pinch. These kinds of results can then guide subsequent, more detailed fully kinetic modeling efforts. We report on resistive 2-d MHD modeling results applying the TRAC-II code to the DPF with an emphasis on varying anode and cathode shape. Drive circuit variations are handled in the code using a self-consistent circuit model for the external capacitor bank since the device impedance is strongly coupled to the internal plasma physics. Electrode shape is characterized by the ratio of inner diameter to outer diameter, length to diameter, and various parameterizations for tapering. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Accessibility of Electron Bernstein Modes in Over-Dense Plasma
Batchelor, D.B.; Bigelow, T.S.; Carter, M.D.
1999-04-12
Mode-conversion between the ordinary, extraordinary and electron Bernstein modes near the plasma edge may allow signals generated by electrons in an over-dense plasma to be detected. Alternatively, high frequency power may gain accessibility to the core plasma through this mode conversion process. Many of the tools used for ion cyclotron antenna design can also be applied near the electron cyclotron frequency. In this paper, we investigate the possibilities for an antenna that may couple to electron Bernstein modes inside an over-dense plasma. The optimum values for wavelengths that undergo mode-conversion are found by scanning the poloidal and toroidal response of the plasma using a warm plasma slab approximation with a sheared magnetic field. Only a very narrow region of the edge can be examined in this manner; however, ray tracing may be used to follow the mode converted power in a more general geometry. It is eventually hoped that the methods can be extended to a hot plasma representation. Using antenna design codes, some basic antenna shapes will be considered to see what types of antennas might be used to detect or launch modes that penetrate the cutoff layer in the edge plasma.
Modeling the Spectra of Dense Hydrogen Plasmas: Beyond Occupation Probability
NASA Astrophysics Data System (ADS)
Gomez, T. A.; Montgomery, M. H.; Nagayama, T.; Kilcrease, D. P.; Winget, D. E.
2017-03-01
Accurately measuring the masses of white dwarf stars is crucial in many astrophysical contexts (e.g., asteroseismology and cosmochronology). These masses are most commonly determined by fitting a model atmosphere to an observed spectrum; this is known as the spectroscopic method. However, for cases in which more than one method may be employed, there are well known discrepancies between masses determined by the spectroscopic method and those determined by astrometric, dynamical, and/or gravitational-redshift methods. In an effort to resolve these discrepancies, we are developing a new model of hydrogen in a dense plasma that is a significant departure from previous models. Experiments at Sandia National Laboratories are currently underway to validate these new models, and we have begun modifications to incorporate these models into stellar-atmosphere codes.
A secure and efficient entropy coding based on arithmetic coding
NASA Astrophysics Data System (ADS)
Li, Hengjian; Zhang, Jiashu
2009-12-01
A novel security arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed to generate a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, and data compression with entropy optimality is achieved simultaneously. Because this modification of the arithmetic coding methodology also provides security, it can easily be adopted as the final entropy coding stage of most international image and video standards without changing the existing framework. Theoretical analysis and numerical simulations on both static and adaptive models show that the proposed encryption algorithm achieves a high level of security without loss of compression efficiency or added computational burden with respect to standard AC.
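For readers unfamiliar with the interval mapping that the NDF-keyed scheme perturbs, a toy floating-point arithmetic coder is sketched below. This is plain, standard AC without the security layer or bit-level renormalization, so it is only usable for short messages at double precision; all names are generic.

```python
def _cumulative(probs):
    """Map each symbol to its slice [lo, hi) of the unit interval."""
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = (c, c + p)
        c += p
    return cum

def encode(symbols, probs):
    """Narrow [low, high) by each symbol's probability slice and
    return a single number inside the final interval."""
    cum = _cumulative(probs)
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        lo, hi = cum[s]
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2.0

def decode(value, n, probs):
    """Replay the same interval narrowing to recover n symbols."""
    cum = _cumulative(probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(n):
        span = high - low
        x = (value - low) / span          # position inside current interval
        for s, (lo, hi) in cum.items():
            if lo <= x < hi:
                out.append(s)
                low, high = low + span * lo, low + span * hi
                break
    return out
```

In the scheme described above, the split points of these intervals are additionally driven by the NDF-PRNG, so decoding requires the key as well as the model.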
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Molecular defects that affect platelet dense granules.
Gunay-Aygun, Meral; Huizing, Marjan; Gahl, William A
2004-10-01
Platelet dense granules form using mechanisms shared by melanosomes in melanocytes and by subsets of lysosomes in more generalized cells. Consequently, disorders of platelet dense granules can reveal how organelles form and move within cells. Models for the study of new vesicle formation include isolated delta-storage pool deficiency, combined alphadelta-storage pool deficiency, Hermansky-Pudlak syndrome (HPS), Chediak-Higashi syndrome, Griscelli syndrome, thrombocytopenia absent radii syndrome, and Wiskott-Aldrich syndrome. The molecular bases of dense granule deficiency are known for the seven subtypes of HPS, as well as for Chediak-Higashi syndrome, Griscelli syndrome, and Wiskott-Aldrich syndrome. The gene products involved in these disorders help elucidate the generalized process of the formation of vesicles from extant membranes such as the Golgi.
Coalescence preference in densely packed microbubbles
Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook
2015-01-13
A bubble merged from two parent bubbles with different size tends to be placed closer to the larger parent. This phenomenon is known as the coalescence preference. Here we demonstrate that the coalescence preference can be blocked inside a densely packed cluster of bubbles. We utilized high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events inside densely packed microbubbles with a local packing fraction of ~40%. The surface energy release theory predicts an exponent of 5 in a relation between the relative coalescence position and the parent size ratio, whereas our observation for coalescence in densely packed microbubbles shows a different exponent of 2. We believe that this result would be important to understand the reality of coalescence dynamics in a variety of packing situations of soft matter.
IR Spectroscopy of PAHs in Dense Clouds
NASA Astrophysics Data System (ADS)
Allamandola, Louis; Bernstein, Max; Mattioda, Andrew; Sandford, Scott
2007-05-01
Interstellar PAHs are likely to be a component of the ice mantles that form on dust grains in dense molecular clouds. PAHs frozen in grain mantles will produce IR absorption bands, not IR emission features. A couple of very weak absorption features in ground based spectra of a few objects embedded in dense clouds may be due to PAHs. Additionally, spaceborne observations in the 5 to 8 μm region, the region in which PAH spectroscopy is rich, reveal unidentified new bands and significant variation from object to object. It has not been possible to properly evaluate the contribution of PAH bands to these IR observations because the laboratory absorption spectra of PAHs condensed in realistic interstellar mixed-molecular ice analogs are lacking. These experimental data are necessary to interpret observations because, in ice mantles, the interaction of PAHs with the surrounding molecules affects PAH IR band positions, widths, profiles, and intrinsic strengths. Furthermore, PAHs are readily ionized in pure H2O ice, further altering the PAH spectrum. This laboratory proposal aims to remedy the situation by studying the IR spectroscopy of PAHs frozen in laboratory ice analogs that realistically reflect the composition of the interstellar ices observed in dense clouds. The purpose is to provide laboratory spectra which can be used to interpret IR observations. We will measure the spectra of these mixed molecular ices containing PAHs before and after ionization and determine the intrinsic band strengths of neutral and ionized PAHs in these ice analogs. This will enable a quantitative assessment of the role that PAHs can play in determining the 5-8 μm spectrum of dense clouds and will directly address the following two fundamental questions associated with dense cloud spectroscopy and chemistry: 1- Can PAHs be detected in dense clouds? 2- Are PAH ions components of interstellar ice?
Fast temperature relaxation model in dense plasmas
NASA Astrophysics Data System (ADS)
Faussurier, Gérald; Blancard, Christophe
2017-01-01
We present a fast model to calculate the temperature-relaxation rates in dense plasmas. The electron-ion interaction-potential is calculated by combining a Yukawa approach and a finite-temperature Thomas-Fermi model. We include the internal energy as well as the excess energy of ions using the QEOS model. Comparisons with molecular dynamics simulations and calculations based on an average-atom model are presented. This approach allows the study of the temperature relaxation in a two-temperature electron-ion system in warm and hot dense matter.
Superfluid vortices in dense quark matter
NASA Astrophysics Data System (ADS)
Mallavarapu, S. Kumar; Alford, Mark; Windisch, Andreas; Vachaspati, Tanmay
2016-03-01
Superfluid vortices in the color-flavor-locked (CFL) phase of dense quark matter are known to be energetically disfavored as compared to well-separated triplets of ``semi-superfluid'' color flux tubes. In this talk we will provide results which will identify regions in parameter space where the superfluid vortex spontaneously decays. We will also discuss the nature of the mode that is responsible for the decay of a superfluid vortex in dense quark matter. We will conclude by mentioning the implications of our results to neutron stars.
Demagnetization effects in dense nanoparticle assemblies
NASA Astrophysics Data System (ADS)
Normile, P. S.; Andersson, M. S.; Mathieu, R.; Lee, S. S.; Singh, G.; De Toro, J. A.
2016-10-01
We highlight the relevance of demagnetizing-field corrections in the characterization of dense magnetic nanoparticle assemblies. By an analysis that employs in-plane and out-of-plane magnetometry on cylindrical assemblies, we demonstrate the suitability of a simple analytical formula-based correction method. This allows us to identify artifacts of the demagnetizing field in temperature-dependent susceptibility curves (e.g., shoulder peaks in curves from a disordered assembly of essentially bare magnetic nanoparticles). The same analysis approach is shown to be a straightforward procedure for determining the magnetic nanoparticle packing fraction in dense, disordered assemblies.
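The simple analytical correction alluded to above can be stated in the standard uniform-magnetization approximation (assumed here for illustration; the appropriate demagnetizing factor N for a cylindrical assembly depends on its aspect ratio and field orientation, which this sketch does not compute):

```python
def internal_field(H_app, M, N):
    """Internal field of a uniformly magnetized sample with
    demagnetizing factor N (SI units): H_int = H_app - N * M."""
    return H_app - N * M

def internal_susceptibility(chi_ext, N):
    """Convert an externally measured susceptibility chi_ext = M/H_app
    into the intrinsic one: chi_int = M/H_int = chi_ext/(1 - N*chi_ext).
    The correction grows with chi_ext, which is why it matters most
    for dense (strongly responding) nanoparticle assemblies."""
    return chi_ext / (1.0 - N * chi_ext)
```

For dilute assemblies chi_ext is small and the correction is negligible; for dense assemblies it can reshape temperature-dependent susceptibility curves, producing the artifacts (e.g., shoulder peaks) discussed above.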
MACRAD: A mass analysis code for radiators
Gallup, D.R.
1988-01-01
A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length to diameter ratio, vapor to wick radius, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.
Applications of Coding in Network Communications
ERIC Educational Resources Information Center
Chang, Christopher SungWook
2012-01-01
This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…
Preparation of a dense, polycrystalline ceramic structure
Cooley, Jason; Chen, Ching-Fong; Alexander, David
2010-12-07
Ceramic nanopowder was sealed inside a metal container under a vacuum. The sealed evacuated container was forced through a severe deformation channel at an elevated temperature below the melting point of the ceramic nanopowder. The result was a dense nanocrystalline ceramic structure inside the metal container.
Coalescence preference in dense packing of bubbles
NASA Astrophysics Data System (ADS)
Kim, Yeseul; Lim, Su Jin; Gim, Bopil; Weon, Byung Mook
2015-11-01
Coalescence preference is the tendency of a bubble merged from the contact of two parent bubbles to be located nearer to the larger parent. Here, we show that the coalescence preference can be blocked by dense packing of neighboring bubbles. We use high-speed high-resolution X-ray microscopy to clearly visualize individual coalescence events, which occur on microsecond time scales, inside dense packings of microbubbles with a local packing fraction of ~40%. Previous theory and experimental evidence predict a power of -5 in the relation between the relative coalescence position and the parent size ratio. However, our new observations of coalescence preference in densely packed microbubbles show a different power of -2. We believe that this result may be important for understanding coalescence dynamics in dense packings of soft matter. This work (NRF-2013R1A22A04008115) was supported by the Mid-career Researcher Program through an NRF grant funded by the MEST, by the Ministry of Science, ICT and Future Planning (2009-0082580), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A6A3A04039257).
Dense peripheral corneal clouding in Scheie syndrome.
Summers, C G; Whitley, C B; Holland, E J; Purple, R L; Krivit, W
1994-05-01
A 28-year-old woman with Scheie syndrome (MPS I-S) presented with the unusual feature of extremely dense peripheral corneal clouding, allowing maintenance of good central visual acuity. Characteristic systemic features, an abnormal electroretinogram result, and absent alpha-L-iduronidase activity confirmed the diagnosis despite the unusual corneal pattern of clouding.
DENSE NONAQUEOUS PHASE LIQUIDS -- A WORKSHOP SUMMARY
site characterization, and, therefore, DNAPL remediation, can be expected. Dense nonaqueous phase liquids (DNAPLs) in the subsurface are long-term sources of ground-water contamination, and may persist for centuries before dissolving completely in adjacent ground water. In respo...
Flexure modelling at seamounts with dense cores
NASA Astrophysics Data System (ADS)
Kim, Seung-Sep; Wessel, Paul
2010-08-01
The lithospheric response to seamounts and ocean islands has been successfully described by deformation of an elastic plate induced by a given volcanic load. If the shape and mass of a seamount are known, the lithospheric flexure due to the seamount is determined by the thickness of an elastic plate, Te, which depends on the load density and the age of the plate at the time of seamount construction. We can thus infer important thermomechanical properties of the lithosphere from Te estimates at seamounts and their correlation with other geophysical inferences, such as cooling of the plate. Whereas the bathymetry (i.e. shape) of a seamount is directly observable, the total mass often requires an assumption of the internal seamount structure. The conventional approach considers the seamount to have a uniform density (e.g. density of the crust). This choice, however, tends to bias the total mass acting on an elastic plate. In this study, we will explore a simple approximation to the seamount's internal structure that considers a dense core and a less dense outer edifice. Although the existence of a core is supported by various gravity and seismic studies, the role of such volcanic cores in flexure modelling has not been fully addressed. Here, we present new analytic solutions for plate flexure due to axisymmetric dense core loads, and use them to examine the effects of dense cores in flexure calculations for a variety of synthetic cases. Comparing analytic solutions with and without a core indicates that the flexure model with uniform density underestimates Te by at least 25 per cent. This bias increases when the uniform density is taken to be equal to the crustal density. We also propose a practical application of the dense core model by constructing a uniform density load of same mass as the dense core load. This approximation allows us to compute the flexural deflection and gravity anomaly of a seamount in the wavenumber domain and minimize the limitations
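A minimal sketch of the standard wavenumber-domain flexure solution that such modelling builds on is given below, for the uniform-density load case that the dense-core model generalizes. The material constants are generic lithosphere values assumed for illustration, not the paper's parameters.

```python
E_YOUNG = 100e9   # Young's modulus (Pa), a typical lithospheric value
NU = 0.25         # Poisson's ratio
G = 9.81          # gravitational acceleration (m/s^2)

def flexural_rigidity(Te):
    """Flexural rigidity D (N m) of an elastic plate of thickness Te (m):
    D = E Te^3 / (12 (1 - nu^2))."""
    return E_YOUNG * Te**3 / (12.0 * (1.0 - NU**2))

def plate_deflection(k, h_k, Te, rho_l=2800.0, rho_w=1030.0,
                     rho_m=3300.0, rho_i=2800.0):
    """Wavenumber-domain plate deflection under a seafloor load of
    amplitude spectrum h_k and uniform density rho_l, from the thin
    elastic plate equation D k^4 w + (rho_m - rho_i) g w =
    (rho_l - rho_w) g h; at k = 0 this reduces to the Airy isostatic
    limit, and at short wavelengths the plate supports the load."""
    D = flexural_rigidity(Te)
    drho = rho_m - rho_i
    return (rho_l - rho_w) * h_k / (drho * (1.0 + D * k**4 / (drho * G)))
```

Because the response depends on Te through D, assuming a single uniform load density where a dense core actually exists biases the inferred Te, which is the effect quantified above.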
Monte Carlo simulations of ionization potential depression in dense plasmas
Stransky, M.
2016-01-15
A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas, in the high density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
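For orientation, the Debye-Hückel depression that simulations like these are compared against in the low-density regime can be evaluated directly. This is only the textbook screening formula; the Ecker-Kröll and Stewart-Pyatt interpolations the abstract benchmarks against are not reproduced here.

```python
import math

E_CHARGE = 1.602176634e-19      # elementary charge (C)
EPS0 = 8.8541878128e-12         # vacuum permittivity (F/m)
K_B = 1.380649e-23              # Boltzmann constant (J/K)

def debye_length(n_e, T_e):
    """Electron Debye length (m) for density n_e (m^-3) and
    temperature T_e (K)."""
    return math.sqrt(EPS0 * K_B * T_e / (n_e * E_CHARGE**2))

def ipd_debye_huckel(z, n_e, T_e):
    """Debye-Hückel ionization-potential depression (eV) for an ion of
    charge z: Delta_I = (z + 1) e^2 / (4 pi eps0 lambda_D).  Dividing
    the energy by e converts J to eV, leaving a single factor of e."""
    lam = debye_length(n_e, T_e)
    return (z + 1) * E_CHARGE / (4.0 * math.pi * EPS0 * lam)
```

Denser plasmas have shorter Debye lengths and hence larger depressions, the trend against which the Monte Carlo microfield results are compared.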
Superfluidity and vortices in dense quark matter
NASA Astrophysics Data System (ADS)
Mallavarapu, Satyanarayana Kumar
This dissertation will elucidate specific features of superfluid behavior in dense quark matter. It will start with issues regarding spontaneous decay of superfluid vortices in dense quark matter. This will be followed by topics that explain superfluid phenomena from a field-theoretical viewpoint. In particular, the first part of the dissertation will discuss superfluid vortices in the color-flavor-locked (CFL) phase of dense quark matter, which are known to be energetically disfavored as compared to well-separated triplets of "semi-superfluid" color flux tubes. We will provide results which identify regions in parameter space where the superfluid vortex spontaneously decays. We will also discuss the nature of the mode that is responsible for the decay of a superfluid vortex in dense quark matter. We will conclude by mentioning the implications of our results for neutron stars. In the field theoretic formulation of a zero-temperature superfluid, one connects the superfluid four-velocity, which is a macroscopic observable, with a microscopic field variable, namely the gradient of the phase of a Bose-condensed scalar field. On the other hand, a superfluid at nonzero temperatures is usually described in terms of a two-fluid model: the superfluid and the normal fluid. In the latter part of the dissertation we offer a deeper understanding of the two-fluid model by deriving it from an underlying microscopic field theory. In particular, we shall obtain the macroscopic properties of a uniform, dissipationless superfluid at low temperatures and weak coupling within the framework of a ϕ^4 model. Though our study is very general, it may also be viewed as a step towards understanding the superfluid properties of various phases of dense nuclear and quark matter in the interior of compact stars.
Chemical Dense Gas Modeling in Cities
NASA Astrophysics Data System (ADS)
Brown, M. J.; Williams, M. D.; Nelson, M. A.; Streit, G. E.
2007-12-01
Many industrial facilities have on-site storage of chemicals and are within a few kilometers of residential population. Chemicals are transported around the country via trains and trucks and often go through populated areas on their journey. Many of the chemicals, like chlorine and phosgene, are toxic and when released into the air are heavier-than-air dense gases that hug the ground and result in high airborne concentrations at breathing level. There is considerable concern about the vulnerability of these stored and transported chemicals to terrorist attack and the impact a release could have on highly-populated urban areas. There is the possibility that the impacts of a dense gas release within a city would be exacerbated since the buildings might act to trap the toxic cloud at street level and channel it over a large area down side streets. However, no one is quite sure what will happen for a release in cities since there is a dearth of experimental data. There are a number of fast-running dense gas models used in the air pollution and emergency response community, but there are none that account for the complex flow fields and turbulence generated by buildings. As part of this presentation, we will discuss current knowledge regarding dense gas releases around buildings and other obstacles. We will present information from wind tunnel and field experiments, as well as computational fluid dynamics modeling. We will also discuss new fast response modeling efforts which are trying to account for dense gas transport and dispersion in cities.
A Novel Removal Method for Dense Stripes in Remote Sensing Images
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Shen, Huanfeng; Yuan, Qiangqiang; Zhang, Liangpei; Cheng, Qing
2016-06-01
In remote sensing images, stripe noise severely degrades imaging quality and limits subsequent applications, especially when the stripes are dense. To process densely striped data well and ensure a reliable solution, we construct a constraint based on statistical properties in our proposed model and use it to control the whole destriping process. The alternating direction method of multipliers (ADMM) is applied in this work to solve and accelerate the model optimization. Experimental results on real data with different kinds of dense stripe noise demonstrate the effectiveness of the proposed method from both qualitative and quantitative perspectives.
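ADMM itself is easy to demonstrate on a toy splitting problem. The sketch below solves min_x 0.5||x - b||^2 + lam*||x||_1 by alternating a closed-form x-update, a soft-threshold z-update, and a dual ascent step; the paper's destriping model and its statistical stripe constraint are not reproduced here.

```python
# Toy ADMM instance: min_x 0.5*||x - b||^2 + lam*||x||_1, split as x = z.
# This only illustrates the ADMM mechanics named in the abstract; the
# paper's destriping model is more elaborate.

def soft(v, t):
    """Elementwise soft threshold, the proximal operator of t*||.||_1."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def admm_l1(b, lam, rho=1.0, iters=200):
    n = len(b)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # x-update: the quadratic subproblem has a closed form
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: proximal (soft-threshold) step
        z = soft([x[i] + u[i] for i in range(n)], lam / rho)
        # dual update
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

The iterates converge to the known closed-form solution soft(b, lam), which makes the toy easy to check.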
Dense image registration through MRFs and efficient linear programming.
Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos
2008-12-01
In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.
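The discrete objective described above, image costs at control points plus a smoothness term over grid neighbors, can be written down directly. The toy below is our own illustration: labels stand for quantized control-point displacements, the linear smoothness term is our assumption, and brute-force search stands in for the paper's efficient primal-dual linear programming.

```python
import itertools

# Toy discrete MRF over a registration-style control grid: each node takes
# one of n_labels quantized displacement labels.

def mrf_energy(labels, data_cost, pairs, lam=1.0):
    """E(l) = sum_p D_p(l_p) + lam * sum_{(p,q) in pairs} |l_p - l_q|."""
    e = sum(data_cost[p][l] for p, l in enumerate(labels))
    e += lam * sum(abs(labels[p] - labels[q]) for p, q in pairs)
    return e

def exhaustive_min(data_cost, pairs, n_labels, lam=1.0):
    """Brute-force search over all labelings (only viable on toy grids;
    the paper uses primal-dual linear programming instead)."""
    n = len(data_cost)
    return min(itertools.product(range(n_labels), repeat=n),
               key=lambda l: mrf_energy(l, data_cost, pairs, lam))
```

With weak smoothness the data term wins; increasing lam forces neighboring control points to agree, which is exactly the trade-off the smoothness term in the registration energy controls.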
A novel double patterning approach for 30nm dense holes
NASA Astrophysics Data System (ADS)
Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven
2011-04-01
Double patterning technology (DPT) is commonly accepted as the major workhorse beyond water-immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. Here, a novel process of double-patterned pillars combined with image reversal is proposed for the realization of low-cost dense holes in 30nm-node DRAM. Pillar-formation lithography provides much better optical contrast than the counterpart hole patterning with similar CD requirements. With a reliable freezing process, double-patterned pillars can be readily implemented, and a novel image-reversal process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of image-reversal processes are investigated and their pros and cons discussed. An economical approach with optimized lithography performance is proposed for application to the 30nm DRAM node.
Texture-Aware Dense Image Matching Using Ternary Census Transform
NASA Astrophysics Data System (ADS)
Hu, Han; Chen, Chongtai; Wu, Bo; Yang, Xiaoxia; Zhu, Qing; Ding, Yulin
2016-06-01
Textureless areas and geometric discontinuities are major problems in state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost methods, but in textureless areas, where the intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes and must compromise between smoothness and discontinuities. The aim of this study is to overcome these issues in dense image matching by extending the industry-proven semi-global matching through 1) a ternary census transform, which takes three outputs in a single ordering comparison and encodes the results in two bits rather than one, and 2) texture information used to self-tune the parameters, which both preserves sharp edges and enforces smoothness when necessary. Experimental results using various datasets from different platforms show that the visual quality of the triangulated point clouds in urban areas can be largely improved by the proposed methods.
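A ternary census transform as described, three-way comparisons encoded in two bits each, can be sketched as follows; the window radius r and similarity threshold eps are our assumptions, not values from the paper.

```python
# Sketch of a ternary census transform: each neighbor compares to the
# center pixel as below / similar / above, encoded in 2 bits.

def ternary_census(img, r=1, eps=2):
    h, w = len(img), len(img[0])
    codes = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            c, code = img[y][x], 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dx == 0 and dy == 0:
                        continue
                    n = img[y + dy][x + dx]
                    t = 0 if n < c - eps else (2 if n > c + eps else 1)
                    code = (code << 2) | t   # 2 bits per neighbor
            codes[y][x] = code
    return codes

def cost(a, b):
    """Matching cost: number of differing 2-bit neighbor symbols."""
    d, x = 0, a ^ b
    while x:
        d += 1 if (x & 3) else 0
        x >>= 2
    return d
```

The "similar" band is what suppresses the small random noise in textureless areas: tiny intensity fluctuations stay within eps and yield identical symbols, unlike the strict two-way split of the binary census.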
Efficiently dense hierarchical graphene based aerogel electrode for supercapacitors
NASA Astrophysics Data System (ADS)
Wang, Xin; Lu, Chengxing; Peng, Huifen; Zhang, Xin; Wang, Zhenkun; Wang, Gongkai
2016-08-01
Boosting gravimetric and volumetric capacitances simultaneously at high rate remains a challenge in the development of graphene-based supercapacitors. We report the preparation of dense hierarchical graphene/activated carbon composite aerogels via a reduction-induced self-assembly process coupled with a drying post-treatment. The compact and porous structures of the composite aerogels could be maintained. The drying post-treatment significantly increases the packing density of the aerogels. The introduced activated carbons play the key roles of spacers and bridges, mitigating the restacking of adjacent graphene nanosheets and connecting lateral and vertical graphene nanosheets, respectively. The optimized aerogel, with a packing density of 0.67 g cm-3, delivers maximum gravimetric and volumetric capacitances of 128.2 F g-1 and 85.9 F cm-3, respectively, at a current density of 1 A g-1 in aqueous electrolyte, showing no apparent degradation of the specific capacitance at a current density of 10 A g-1 after 20000 cycles. The corresponding gravimetric and volumetric capacitances of 116.6 F g-1 and 78.1 F cm-3, with acceptable cyclic stability, are also achieved in ionic liquid electrolyte. The results demonstrate a feasible strategy for designing dense hierarchical graphene-based aerogels for supercapacitors.
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
2016-02-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Phylogeny of genetic codes and punctuation codes within genetic codes.
Seligmann, Hervé
2015-03-01
Punctuation codons (starts, stops) delimit genes and reflect translation apparatus properties. Most codon reassignments involve punctuation. Here, two complementary approaches classify natural genetic codes: (A) properties of amino acids assigned to codons (classical phylogeny), coding stops as X (A1, antitermination/suppressor tRNAs insert unknown residues) or as gaps (A2, no translation, classical stop); and (B) considering only punctuation status (start, stop and other codons coded as -1, 0 and 1 (B1); 0, -1 and 1 (B2, reflecting ribosomal translational dynamics); and 1, -1 and 0 (B3, starts/stops as opposites)). All methods separate most mitochondrial codes from most nuclear codes; Gracilibacteria consistently cluster with metazoan mitochondria; mitochondria co-hosted with chloroplasts cluster with nuclear codes. Method A1 clusters the euplotid nuclear code with metazoan mitochondria; A2 separates euplotids from mitochondria. The Firmicute bacteria Mycoplasma/Spiroplasma and protozoan (and lower metazoan) mitochondria share codon-amino acid assignments: A1 clusters them with mitochondria, while under A2 they cluster with the standard genetic code; constraints on amino acid ambiguity versus punctuation signaling produced the mitochondrial versus bacterial versions of this genetic code. Punctuation analysis B2 converges best with classical phylogenetic analyses, stressing the need for a unified theory of genetic code punctuation accounting for ribosomal constraints.
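The punctuation-only encodings (B1-B3) amount to mapping each of the 64 codons to a three-valued label and comparing the resulting vectors across codes. A minimal sketch, using the standard nuclear code's stop codons and ATG as the sole start (a simplification; real codes have additional starts):

```python
# Punctuation-status encoding of a genetic code as a 64-vector.
# STANDARD is a simplified standard nuclear code: three stops, ATG start.

STANDARD = {"TAA": "stop", "TAG": "stop", "TGA": "stop", "ATG": "start"}

SCHEMES = {"B1": {"start": -1, "stop": 0, "other": 1},
           "B2": {"start": 0, "stop": -1, "other": 1},
           "B3": {"start": 1, "stop": -1, "other": 0}}

def punctuation_vector(code, scheme="B1"):
    m = SCHEMES[scheme]
    bases = "TCAG"
    codons = [a + b + c for a in bases for b in bases for c in bases]
    return [m[code.get(cd, "other")] for cd in codons]
```

Distances between such vectors (Hamming, Euclidean) then give the between-code comparisons that a clustering or phylogeny routine can consume.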
Sparse coding based feature representation method for remote sensing images
NASA Astrophysics Data System (ADS)
Oguslu, Ender
In this dissertation, we study sparse coding based feature representation methods for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary from the sub-bands extracted from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary can be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources, and the spatial information of the HSI data is not included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP), and image fusion and recursive filtering (IFRF). The results showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further
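The encoding step described, correlating a pixel's spectrum with dictionary atoms and then soft-thresholding, can be sketched in a few lines. This is a simplification of the SC-DFR pipeline; the function names and the threshold t are our own.

```python
import math

# Sketch of a soft-threshold sparse encoding step: correlate the pixel
# spectrum with dictionary atoms, then zero out weak responses.

def soft_threshold(v, t):
    """Shrink each coefficient toward zero by t; small ones become 0."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def sparse_code(pixel, dictionary, t):
    """Inner products with the atoms, followed by soft thresholding."""
    feats = [sum(p * a for p, a in zip(pixel, atom)) for atom in dictionary]
    return soft_threshold(feats, t)
```

The appeal over full sparse coding is that this is a single pass, with no convex optimization per pixel; sparsity comes entirely from the threshold.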
IR Spectroscopy of PANHs in Dense Clouds
NASA Astrophysics Data System (ADS)
Allamandola, Louis; Mattioda, Andrew; Sandford, Scott
2008-03-01
Interstellar PAHs are likely to be frozen into ice mantles on dust grains in dense clouds. These PAHs will produce IR absorption bands, not emission features. A couple of very weak absorption features in ground-based spectra of a few objects in dense clouds may be due to PAHs. It is now thought that aromatic molecules in which N atoms are substituted for a few of the C atoms in a PAH's hexagonal skeletal network (PANHs) may well be as abundant and ubiquitous throughout the interstellar medium as PAHs. Spaceborne observations in the 5 to 8 μm region, the region in which PAH spectroscopy is rich, reveal unidentified new bands and significant variation from object to object. It is not possible to analyze these observations because lab spectra of PANHs and PAHs condensed in realistic interstellar ice analogs are lacking. These lab data are necessary to interpret observations because, in ice mantles, the surrounding molecules affect PANH and PAH IR band positions, widths, profiles, and intrinsic strengths. Further, PAHs (and PANHs?) are readily ionized in pure H2O ice, further altering the spectrum. This proposal starts to address this situation by studying the IR spectra of PANHs frozen in laboratory ice analogs that reflect the composition of the interstellar ices observed in dense clouds. Thanks to Spitzer Cycle-4 support, we are now measuring the spectra of PAHs in interstellar ice analogs to provide laboratory spectra that can be used to interpret IR observations. Here we propose to extend this work to PANHs. We will measure the spectra of these interstellar ice analogs containing PANHs before and after ionization and determine the band strengths of neutral and ionized PANHs in these ices. This will enable a quantitative assessment of the role that PANHs can play in the 5-8 μm spectrum of dense clouds and address the following two fundamental questions associated with dense cloud spectroscopy and chemistry: 1- Can PANHs be detected in dense clouds? 2- Are PANH ions
Fully kinetic simulations of megajoule-scale dense plasma focus
Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M.; Welch, D.; Meehan, B. T.; Hagen, E. C.
2014-10-15
Dense plasma focus (DPF) Z-pinch devices are sources of copious high energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule-scale DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.
NASA Astrophysics Data System (ADS)
Bandhu, Ashutosh Vishwa; Aggarwal, Neha; Sengupta, Supratim
2013-12-01
The origin of the genetic code marked a major transition from a plausible RNA world to the world of DNA and proteins and is an important milestone in our understanding of the origin of life. We examine the efficacy of the physico-chemical hypothesis of code origin by carrying out simulations of code-sequence coevolution in finite populations in stages, leading first to the emergence of ten amino acid code(s) and subsequently to 14 amino acid code(s). We explore two different scenarios of primordial code evolution. In one scenario, competition occurs between populations of equilibrated code-sequence sets, while in the other, new codes compete with existing codes as they are gradually introduced into the population with a finite probability. In either case, we find that natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. The code whose structure is most consistent with the standard genetic code is often not among the codes that have a high fixation probability. However, we find that the composition of the code population affects the code fixation probability. A physico-chemically optimized code gets fixed with a significantly higher probability if it competes against a set of randomly generated codes. Our results suggest that physico-chemical optimization may not be the sole driving force in ensuring the emergence of the standard genetic code.
NASA Astrophysics Data System (ADS)
Basurto, Luis
This project consists of upgrades to the massively parallel NRLMOL electronic structure code that enhance its performance and flexibility by: a) utilizing dynamically allocated arrays, b) executing in parallel sections of the program that were previously executed serially, and c) exploring simultaneous concurrent executions of the program through an already existing MPI environment, thus enabling the simulation of larger systems than currently possible. Also developed was a graphical user interface that allows less experienced users to start performing electronic structure calculations by aiding them in configuring input files and by providing graphical tools for displaying and analyzing results. Additionally, a computational toolkit that can take advantage of large supercomputers and use various levels of approximation for atomic interactions was developed to search for stable atomic clusters and predict novel stable endohedral fullerenes. As an application of the toolkit, a search was conducted for stable isomers of the Sc3N@C80 fullerene. In this search, about 1.2 million isomers of C80 were optimized in various charge states at the PM6 level. Subsequently, using the selected optimized C80 isomers in various charge states, about 10,000 isomers of Sc3N@C80 were constructed and optimized using the semi-empirical PM6 quantum chemical method. A few of the lowest-energy Sc3N@C80 isomers were then optimized at the DFT level. The calculation confirms the lowest 3 isomers previously reported in the literature, but 4 new isomers are found within the lowest 10. Using the upgraded NRLMOL code, a study was done of the electronic structure of a multichromophoric molecular complex containing two each of a borondipyrromethene dye, Zn-tetraphenyl-porphyrin, bisphenyl anthracene, and a fullerene. A systematic examination of the effect of
Topological Surface States in Dense Solid Hydrogen.
Naumov, Ivan I; Hemley, Russell J
2016-11-11
Metallization of dense hydrogen and associated possible high-temperature superconductivity represents one of the key problems of physics. Recent theoretical studies indicate that before becoming a good metal, compressed solid hydrogen passes through a semimetallic stage. We show that such semimetallic phases predicted to be the most stable at multimegabar (∼300 GPa) pressures are not conventional semimetals: they exhibit topological metallic surface states inside the bulk "direct" gap in the two-dimensional surface Brillouin zone; that is, metallic surfaces may appear even when the bulk of the material remains insulating. Examples include hydrogen in the Cmca-12 and Cmca-4 structures; Pbcn hydrogen also has metallic surface states but they are of a nontopological nature. The results provide predictions for future measurements, including probes of possible surface superconductivity in dense hydrogen.
PHOTOCHEMICAL HEATING OF DENSE MOLECULAR GAS
Glassgold, A. E.; Najita, J. R.
2015-09-10
Photochemical heating is analyzed with an emphasis on the heating generated by chemical reactions initiated by the products of photodissociation and photoionization. The immediate products are slowed down by collisions with the ambient gas and then heat the gas. In addition to this direct process, heating is also produced by the subsequent chemical reactions initiated by these products. Some of this chemical heating comes from the kinetic energy of the reaction products and the rest from collisional de-excitation of the product atoms and molecules. In considering dense gas dominated by molecular hydrogen, we find that the chemical heating is sometimes as large as, if not much larger than, the direct heating. In very dense gas, the total photochemical heating approaches 10 eV per photodissociation (or photoionization), competitive with other ways of heating molecular gas.
Impacts by Compact Ultra Dense Objects
NASA Astrophysics Data System (ADS)
Birrell, Jeremey; Labun, Lance; Rafelski, Johann
2012-03-01
We propose to search for compact ultra dense objects (CUDOs) of nuclear or greater density, which could constitute a significant fraction of the dark matter [1]. Considering their high density, the gravitational tidal forces are significant and atomic-density matter cannot stop an impacting CUDO, which punctures the surface of the target body, pulverizing, heating and entraining material near its trajectory through the target [2]. Because impact features endure over geologic timescales, the Earth, Moon, Mars, Mercury and large asteroids are well-suited to act as time-integrating CUDO detectors. There are several potential candidates for CUDO structure, such as strangelet fragments or, more generally, dark matter if mechanisms exist for it to form compact objects. [1] B. J. Carr, K. Kohri, Y. Sendouda, and J. Yokoyama, Phys. Rev. D 81, 104019 (2010). [2] L. Labun, J. Birrell, J. Rafelski, Solar System Signatures of Impacts by Compact Ultra Dense Objects, arXiv:1104.4572.
The kinetic chemistry of dense interstellar clouds
NASA Technical Reports Server (NTRS)
Graedel, T. E.; Langer, W. D.; Frerking, M. A.
1982-01-01
A model of the time-dependent chemistry of dense interstellar clouds is formulated to study the dominant chemical processes in carbon and oxygen isotope fractionation, the formation of nitrogen-containing molecules, and the evolution of product molecules as a function of cloud density and temperature. The abundances of the dominant isotopes of the carbon- and oxygen-bearing molecules are calculated. The chemical abundances are found to be quite sensitive to electron concentration since the electron concentration determines the ratio of H3(+) to He(+), and the electron density is strongly influenced by the metals abundance. For typical metal abundances and for H2 cloud density not less than 10,000 molecules/cu cm, nearly all carbon exists as CO at late cloud ages. At high cloud density, many aspects of the chemistry are strongly time dependent. Finally, model calculations agree well with abundances deduced from observations of molecular line emission in cold dense clouds.
Hydrodynamic stellar interactions in dense star clusters
NASA Technical Reports Server (NTRS)
Rasio, Frederic A.
1993-01-01
Highly detailed HST observations of globular-cluster cores and galactic nuclei motivate new theoretical studies of the violent dynamical processes which govern the evolution of these very dense stellar systems. These processes include close stellar encounters and direct physical collisions between stars. Such hydrodynamic stellar interactions are thought to explain the large populations of blue stragglers, millisecond pulsars, X-ray binaries, and other peculiar sources observed in globular clusters. Three-dimensional hydrodynamics techniques now make it possible to perform realistic numerical simulations of these interactions. The results, when combined with those of N-body simulations of stellar dynamics, should provide for the first time a realistic description of dense star clusters. Here I review briefly current theoretical work on hydrodynamic stellar interactions, emphasizing its relevance to recent observations.
Active fluidization in dense glassy systems.
Mandal, Rituparno; Bhuyan, Pranab Jyoti; Rao, Madan; Dasgupta, Chandan
2016-07-20
Dense soft glasses show strong collective caging behavior at sufficiently low temperatures. Using molecular dynamics simulations of a model glass former, we show that the incorporation of activity or self-propulsion, f0, can induce cage breaking and fluidization, resulting in the disappearance of the glassy phase beyond a critical f0. The diffusion coefficient crosses over from being strongly to weakly temperature dependent as f0 is increased. In addition, we demonstrate that activity induces a crossover from a fragile to a strong glass and a tendency of active particles to cluster. Our results are of direct relevance to the collective dynamics of dense active colloidal glasses and to recent experiments on tagged particle diffusion in living cells.
Topological Surface States in Dense Solid Hydrogen
NASA Astrophysics Data System (ADS)
Naumov, Ivan I.; Hemley, Russell J.
2016-11-01
Metallization of dense hydrogen and associated possible high-temperature superconductivity represents one of the key problems of physics. Recent theoretical studies indicate that before becoming a good metal, compressed solid hydrogen passes through a semimetallic stage. We show that such semimetallic phases predicted to be the most stable at multimegabar (∼300 GPa) pressures are not conventional semimetals: they exhibit topological metallic surface states inside the bulk "direct" gap in the two-dimensional surface Brillouin zone; that is, metallic surfaces may appear even when the bulk of the material remains insulating. Examples include hydrogen in the Cmca-12 and Cmca-4 structures; Pbcn hydrogen also has metallic surface states but they are of a nontopological nature. The results provide predictions for future measurements, including probes of possible surface superconductivity in dense hydrogen.
Structures for dense, crack free thin films
Jacobson, Craig P.; Visco, Steven J.; De Jonghe, Lutgard C.
2011-03-08
The process described herein provides a simple and cost effective method for making crack free, high density thin ceramic film. The steps involve depositing a layer of a ceramic material on a porous or dense substrate. The deposited layer is compacted and then the resultant laminate is sintered to achieve a higher density than would have been possible without the pre-firing compaction step.
Oxygen ion-conducting dense ceramic
Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou
1998-01-01
Preparation, structure, and properties of mixed metal oxide compositions and their uses are described. Mixed metal oxide compositions of the invention have stratified crystalline structure identifiable by means of powder X-ray diffraction patterns. In the form of dense ceramic membranes, the present compositions demonstrate an ability to separate oxygen selectively from a gaseous mixture containing oxygen and one or more other volatile components by means of ionic conductivities.
Shear dispersion in dense granular flows
Christov, Ivan C.; Stone, Howard A.
2014-04-18
We formulate and solve a model problem of dispersion of dense granular materials in rapid shear flow down an incline. The effective dispersivity of the depth-averaged concentration of the dispersing powder is shown to vary as the Péclet number squared, as in classical Taylor–Aris dispersion of molecular solutes. An extension to generic shear profiles is presented, and possible applications to industrial and geological granular flows are noted.
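For comparison, the classical Taylor-Aris result for a molecular solute in laminar pipe flow shows the same Péclet-squared scaling the abstract reports for the granular problem: D_eff = D(1 + Pe^2/48) with Pe = Ua/D. The granular prefactor and geometry differ, so this is only the molecular analogue.

```python
# Classical Taylor-Aris effective axial dispersivity in a circular pipe:
# D_eff = D * (1 + Pe^2 / 48), Pe = U*a/D, with U the mean velocity,
# a the pipe radius, and D the molecular diffusivity. This is the
# molecular-solute analogue the abstract compares against.

def taylor_aris_pipe(D, U, a):
    Pe = U * a / D          # Peclet number
    return D * (1.0 + Pe**2 / 48.0)
```

At large Pe the shear-enhanced term dominates, so the effective dispersivity grows quadratically with flow speed, which is the signature behavior carried over to the granular-flow result.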
Stellar interactions in dense and sparse star clusters
NASA Astrophysics Data System (ADS)
Olczak, C.; Pfalzner, S.; Eckart, A.
2010-01-01
Context. Stellar encounters potentially affect the evolution of the protoplanetary discs in the Orion Nebula Cluster (ONC). However, the role of encounters in other cluster environments is less well known. Aims: We investigate the effect of encounter-induced disc-mass loss in different cluster environments. Methods: Starting from an ONC-like cluster, we vary the cluster size and density to determine the correlation of the collision time scale and disc-mass loss. We use the nbody6++ code to model the dynamics of these clusters and analyse the disc-mass loss due to encounters. Results: We find that the encounter rate depends strongly on the cluster density but remains rather unaffected by the size of the stellar population. This dependency translates directly into the effect on the encounter-induced disc-mass loss. The essential outcomes of the simulations are: i) even in clusters of four times lower density than the ONC, the effect of encounters is still apparent; ii) the density of the ONC itself marks a threshold: in less dense and less massive clusters it is the massive stars that dominate the encounter-induced disc-mass loss, whereas in denser and more massive clusters the low-mass stars play the major role in disc-mass removal. Conclusions: It seems that in the central regions of young dense star clusters - the common sites of star formation - stellar encounters do affect the evolution of protoplanetary discs. With higher cluster density, low-mass stars become more heavily involved in this process. These results can also be applied to extreme stellar systems: in the case of the Arches cluster, one would expect stellar encounters to destroy the discs of most of the low- and high-mass stars within several hundred thousand years, whereas intermediate-mass stars are able to retain their discs to some extent even under these harsh environmental conditions.
Dense spray evaporation as a mixing process
NASA Astrophysics Data System (ADS)
de Rivas, A.; Villermaux, E.
2016-05-01
We explore the processes by which a dense set of small liquid droplets (a spray) evaporates in a dry, stirred gas phase. A dense spray of micron-sized liquid (water or ethanol) droplets is formed in air by a pneumatic atomizer in a closed chamber. The spray is conveyed in ambient air as a plume whose extension depends on the relative humidity of the diluting medium. Standard shear instabilities develop at the plume edge, forming the stretched lamellar structures familiar from passive scalars. Unlike passive scalars, however, these lamellae vanish in a finite time, because individual droplets evaporate at their border in contact with the dry environment. Experiments demonstrate that the lifetime of an individual droplet embedded in a lamella is much larger than expected from the usual d^2 law describing the fate of a single drop evaporating in a quiescent environment. By analogy with the way mixing times are understood from the convection-diffusion equation for passive scalars, we show that the lifetime of a spray lamella stretched at a constant rate γ is t_v = (1/γ) ln((1+ϕ)/ϕ), where ϕ is a parameter that incorporates the thermodynamic and diffusional properties of the vapor in the diluting phase. The case of time-dependent stretching rates is examined too. A dense spray behaves almost as a (nonconserved) passive scalar.
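A quick numerical check of the lamella lifetime t_v = (1/γ) ln((1+ϕ)/ϕ), as we read the abstract's expression; note the logarithmic, rather than d^2-law, dependence on the vapor parameter ϕ:

```python
import math

# Lifetime of a spray lamella stretched at a constant rate gamma,
# t_v = (1/gamma) * ln((1 + phi) / phi), our reading of the abstract's
# expression; phi bundles the thermodynamic and diffusional properties
# of the vapor in the diluting phase.

def lamella_lifetime(gamma, phi):
    return math.log((1.0 + phi) / phi) / gamma
```

The lifetime grows only logarithmically as ϕ decreases, and faster stretching (larger γ) shortens it in direct proportion.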
Dense Correspondences across Scenes and Scales.
Tau, Moria; Hassner, Tal
2016-05-01
We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only a few image pixels, matching only those for which stable scales may be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale-invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts (the pixels around them) which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated at sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale-invariant descriptors to be extracted anywhere in the image. (ii) We explore three means of propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii) we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows accurate dense correspondences to be obtained even between very different images, with little computational cost beyond that required by existing methods.
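The first contribution, spreading scale estimates from sparse interest points to every pixel, can be caricatured with a nearest-neighbor rule. This is a hedged sketch, not the authors' actual propagation scheme, and the keypoint positions and scales are made up:

```python
import numpy as np

def propagate_scales(keypoints, scales, height, width):
    """Give every pixel the scale of its nearest keypoint.

    keypoints: list of (y, x) detector locations; scales: the scale
    estimated at each keypoint. A crude nearest-neighbor stand-in for
    the propagation schemes discussed in the abstract."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.asarray(keypoints, dtype=float)                       # (N, 2)
    d2 = (ys[..., None] - pts[:, 0]) ** 2 + (xs[..., None] - pts[:, 1]) ** 2
    return np.asarray(scales)[np.argmin(d2, axis=-1)]              # (H, W) scale map
```

With a dense scale map in hand, a scale-invariant descriptor can then be extracted at any pixel, which is the point of contribution (i).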
Hybrid-Based Dense Stereo Matching
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Ting, H. W.; Jaw, J. J.
2016-06-01
Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are identified by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. In addition, an extra penalty parameter P_e is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting the values derived from both the SGM cost aggregation and the U-SURF matching, providing more reliable estimates in disparity-discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.
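A minimal sketch of one left-to-right SGM aggregation pass may clarify where the penalty parameters enter. The extra edge penalty below is only a guess at the role of the paper's P_e term, and all parameter values are illustrative:

```python
import numpy as np

def sgm_scanline(cost, p1, p2, edge_mask=None, pe=0.0):
    """One left-to-right SGM cost-aggregation pass along a scanline.

    cost: (W, D) per-pixel matching cost over D disparities.
    p1: penalty for a disparity change of +/-1 between neighbors.
    p2: penalty for larger jumps; pe is added to p2 at pixels flagged
    in edge_mask (a guessed analogue of the paper's P_e term)."""
    W, D = cost.shape
    L = np.zeros((W, D))
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        p2x = p2 + (pe if edge_mask is not None and edge_mask[x] else 0.0)
        best = prev.min()
        up = np.full(D, np.inf)            # transition from disparity d-1
        up[1:] = prev[:-1] + p1
        down = np.full(D, np.inf)          # transition from disparity d+1
        down[:-1] = prev[1:] + p1
        jump = np.full(D, best + p2x)      # any larger disparity jump
        L[x] = cost[x] + np.minimum.reduce([prev, up, down, jump]) - best
    return L
```

In full SGM the aggregated costs of several such directional passes are summed before the winner-takes-all disparity selection.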
The comparison of OPC performance and run time for dense versus sparse solutions
NASA Astrophysics Data System (ADS)
Abdo, Amr; Stobert, Ian; Viswanathan, Ramya; Burns, Ryan; Herold, Klaus; Kallingal, Chidam; Meiring, Jason; Oberschmidt, James; Mansfield, Scott
2008-03-01
The lithographic processes and resolution enhancement techniques (RET) needed to achieve pattern fidelity are becoming more complicated as the required critical dimensions (CDs) shrink. For technology nodes with smaller devices and tolerances, more complex models and proximity corrections are needed and these significantly increase the computational requirements. New simulation techniques are required to address these computational challenges. The new simulation technique we focus on in this work is dense optical proximity correction (OPC). Sparse OPC tools typically require a laborious, manual and time consuming OPC optimization approach. In contrast, dense OPC uses pixel-based simulation that does not need as much manual setup. Dense OPC was introduced because the sparse simulation methodology causes run times to explode as the pattern density increases, since the number of simulation sites in a given optical radius increases. In this work, we completed a comparison of the OPC modeling performance and run time for the dense and the sparse solutions. The analysis found the computational run time to be highly design dependent. The result should lead to the improvement of the quality and performance of the OPC solution and shed light on the pros and cons of using dense versus sparse solutions. This will help OPC engineers decide which solution to apply to their particular situation.
Transmission of epi-alleles with MET1-dependent dense methylation in Arabidopsis thaliana.
Watson, Michael; Hawkes, Emily; Meyer, Peter
2014-01-01
DNA methylation in plants targets cytosines in three sequence contexts: CG, CHG and CHH (H representing A, C or T). Each of these patterns has traditionally been associated with distinct DNA methylation pathways, with CHH methylation being controlled by the RNA-dependent DNA methylation (RdDM) pathway, which employs small RNAs as a guide for the de novo methyltransferase DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2), and the maintenance enzyme DNA METHYLTRANSFERASE 1 (MET1) being responsible for faithful propagation of CG methylation. Here we report an unusual 'dense methylation' pattern under the control of MET1, with methylation in all three sequence contexts. We identified epi-alleles of dense methylation at a non-coding RNA locus (At4g15242) in Arabidopsis ecotypes, with distinct dense methylation and expression characteristics, which are stably maintained and transmitted in genetic crosses and which can be heritably altered by depletion of MET1. This suggests that, in addition to its classical CG maintenance function, at certain loci MET1 plays a role in creating transcriptional diversity based on the generation of independent epi-alleles. Database inspection identified several other loci with MET1-dependent dense methylation patterns. Arabidopsis ecotypes contain distinct epi-alleles of these loci with expression patterns that inversely correlate with methylation density, predominantly within the transcribed region. In Arabidopsis, dense methylation appears to be an exception, as it is only found at a small number of loci. Its presence does, however, highlight the potential for MET1 as a contributor to epigenetic diversity, and it will be interesting to investigate the representation of dense methylation in other plant species.
Mammogram: Can It Find Cancer in Dense Breasts?
... breasts. Breast tissue is composed of fatty (nondense) tissue and connective (dense) tissue. Women with dense breasts have more connective tissue than fatty tissue. About half of women undergoing ...
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low-Density Parity-Check (LDPC) codes, and thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. ARA codes therefore have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes, we show that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph, or protograph, representation that allows for high-speed decoder implementation.
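The encoder chain described above (accumulator precoder, repetition, interleaver, second accumulator) can be sketched over GF(2) as follows. This is a toy illustration, not the exact protograph construction, and it omits the accumulator puncturing used to reach higher rates:

```python
import numpy as np

def ara_encode(bits, repeat=3, interleaver=None, seed=0):
    """Toy accumulate-repeat-accumulate (ARA) encoder over GF(2).

    Chain: accumulator precoder -> rate-1/repeat repetition ->
    interleaver -> accumulator. A running XOR (cumulative sum mod 2)
    implements each 1/(1+D) accumulator."""
    b = np.asarray(bits, dtype=int) % 2
    pre = np.cumsum(b) % 2                         # outer accumulator (precoder)
    rep = np.repeat(pre, repeat)                   # repetition code
    if interleaver is None:                        # default: a fixed random permutation
        interleaver = np.random.default_rng(seed).permutation(rep.size)
    return np.cumsum(rep[np.asarray(interleaver)]) % 2   # inner accumulator
```

With the identity interleaver the chain reduces to two nested running XORs over the repeated precoded bits.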
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in the trellis.
Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
ERIC Educational Resources Information Center
Rahn, Erwin
1984-01-01
Discusses the evolution of standards for bar codes (series of printed lines and spaces that represent numbers, symbols, and/or letters of alphabet) and describes the two types most frequently adopted by libraries--Code-A-Bar and CODE 39. Format of the codes is illustrated. Six references and definitions of terminology are appended. (EJS)
Manually operated coded switch
Barnette, Jon H.
1978-01-01
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
Trellis complexity bounds for decoding linear block codes
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Dolinar, S.; Ekroot, L.; Mceliece, R. J.; Lin, W.
1995-01-01
We consider the problem of finding a trellis for a linear block code that minimizes one or more measures of trellis complexity. The domain of optimization may be different permutations of the same code or different codes with the same parameters. Constraints on trellises, including relationships between the minimal trellis of a code and that of the dual code, are used to derive bounds on complexity. We define a partial ordering on trellises: if a trellis is optimum with respect to this partial ordering, it has the desirable property that it simultaneously minimizes all of the complexity measures examined. We examine properties of such optimal trellises and give examples of optimal permutations of codes, most notably the (48,24,12) quadratic residue code.
NASA Astrophysics Data System (ADS)
Oikonomou, Th.; Provata, A.
2006-03-01
We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19, which is the densest in coding), using non-extensive statistics. We show that the exponents governing the spatial decay of the coding size distributions vary between 5.2 ≤ r ≤ 5.7 for the short scales and 1.45 ≤ q ≤ 1.50 for the large scales. On the contrary, the exponents governing the spatial decay of the non-coding size distributions in these four chromosomes take the values 2.4 ≤ r ≤ 3.2 for the short scales and 1.50 ≤ q ≤ 1.72 for the large scales. These results, in particular the values of the tail exponent q, indicate the existence of correlations in the coding and non-coding size distributions, with a tendency for higher correlations in the non-coding DNA.
Quantum molecular dynamics simulations of transport properties in liquid and dense-plasma plutonium.
Kress, J D; Cohen, James S; Kilcrease, D P; Horner, D A; Collins, L A
2011-02-01
We have calculated the viscosity and self-diffusion coefficients of plutonium in the liquid phase using quantum molecular dynamics (QMD) and in the dense-plasma phase using orbital-free molecular dynamics (OFMD), as well as in the intermediate warm dense matter regime with both methods. Our liquid metal results for viscosity are about 40% lower than measured experimentally, whereas a previous calculation using an empirical interatomic potential (modified embedded-atom method) obtained results 3-4 times larger than the experiment. The QMD and OFMD results agree well at the intermediate temperatures. The calculations in the dense-plasma regime for temperatures from 50 to 5000 eV and densities about 1-5 times ambient are compared with the one-component plasma (OCP) model, using effective charges given by the average-atom code INFERNO. The INFERNO-OCP model results agree with the OFMD to within about a factor of 2, except for the viscosity at temperatures less than about 100 eV, where the disagreement is greater. A Stokes-Einstein relationship of the viscosities and diffusion coefficients is found to hold fairly well separately in both the liquid and dense-plasma regimes.
Multishock Compression Properties of Warm Dense Argon
Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun
2015-01-01
Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed with a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (η′i = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states under the conditions of the different experiments: η′i increases with pressure in the lower-density regime and decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505
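The two compression ratios quoted above are simple arithmetic on the sequence of shocked densities; a minimal sketch with illustrative density values (not the measured data):

```python
def compression_ratios(densities, rho0):
    """Cumulative (eta_i = rho_i/rho_0) and relative (eta'_i = rho_i/rho_{i-1})
    shock-compression ratios from a sequence of shocked densities.
    Inputs are illustrative stand-ins for measured values."""
    cum, rel, prev = [], [], rho0
    for rho in densities:
        cum.append(rho / rho0)   # compression relative to the initial state
        rel.append(rho / prev)   # compression relative to the previous shock
        prev = rho
    return cum, rel
```

Note that the cumulative ratio is simply the product of the relative ratios up to that shock.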
Dense gas in low-metallicity galaxies
NASA Astrophysics Data System (ADS)
Braine, J.; Shimajiri, Y.; André, P.; Bontemps, S.; Gao, Yu; Chen, Hao; Kramer, C.
2017-01-01
Stars form out of the densest parts of molecular clouds. Far-IR emission can be used to estimate the star formation rate (SFR), and high-dipole-moment molecules, typically HCN, trace the dense gas. A strong correlation exists between HCN and far-IR emission, with the ratio being nearly constant over a large range of physical scales. A few recent observations have found HCN to be weak with respect to the far-IR and CO in subsolar metallicity (low-Z) objects. We present observations of the Local Group galaxies M 33, IC 10, and NGC 6822 with the IRAM 30 m and NRO 45 m telescopes, greatly improving the sample of low-Z galaxies observed. HCN, HCO+, CS, C2H, and HNC have been detected. Compared to solar-metallicity galaxies, the nitrogen-bearing species are weak (HCN, HNC) or not detected (CN, HNCO, N2H+) relative to far-IR or CO emission. HCO+ and C2H emission is normal with respect to CO and far-IR. While 13CO is the usual factor of 10 weaker than 12CO, C18O emission was not detected down to very low levels. Including earlier data, we find that the HCN/HCO+ ratio varies with metallicity (O/H) and attribute this to the sharply decreasing nitrogen abundance. The dense gas fraction, traced by the HCN/CO and HCO+/CO ratios, follows the SFR, but in the low-Z objects the HCO+ is much easier to measure. Combined with larger- and smaller-scale measurements, the HCO+ line appears to be an excellent tracer of dense gas and varies linearly with the SFR for both low and high metallicities.
Grain Growth and Silicates in Dense Clouds
NASA Technical Reports Server (NTRS)
Pendeleton, Yvonne J.; Chiar, J. E.; Ennico, K.; Boogert, A.; Greene, T.; Knez, C.; Lada, C.; Roellig, T.; Tielens, A.; Werner, M.; Whittet, D.
2006-01-01
Interstellar silicates are likely to be a part of all grains responsible for visual extinction (Av) in the diffuse interstellar medium (ISM) and dense clouds. A correlation between Av and the depth of the 9.7 micron silicate feature (measured as optical depth, tau(9.7)) is expected if the dust species are well mixed. In the diffuse ISM, such a correlation is observed for lines of sight in the solar neighborhood. A previous study of the silicate absorption feature in the Taurus dark cloud showed a tendency for the correlation to break down at high Av (Whittet et al. 1988, MNRAS, 233, 321), but the scatter was large. We have acquired Spitzer Infrared Spectrograph data of several lines of sight in the IC 5146, Barnard 68, Chameleon I and Serpens dense clouds. Our data set spans an Av range between 2 and 35 magnitudes. All lines of sight show the 9.7 micron silicate feature. The Serpens data appear to follow the diffuse ISM correlation line, whereas the data for the other clouds show a non-linear correlation between the depth of the silicate feature relative to Av, much like the trend observed in the Taurus data. In fact, it appears that for visual extinctions greater than about 10 mag, tau(9.7) begins to level off. This decrease in the growth of the depth of the 9.7 micron feature with increasing Av could indicate the effects of grain growth in dense clouds. In this poster, we explore the possibility that grain growth causes an increase in opacity (Av) without causing a corresponding increase in tau(9.7).
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2008-01-01
An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular repeat-accumulate (IRA) codes.
Resolving Ultrafast Heating of Dense Cryogenic Hydrogen
NASA Astrophysics Data System (ADS)
Zastrau, U.; Sperling, P.; Harmand, M.; Becker, A.; Bornath, T.; Bredow, R.; Dziarzhytski, S.; Fennel, T.; Fletcher, L. B.; Förster, E.; Göde, S.; Gregori, G.; Hilbert, V.; Hochhaus, D.; Holst, B.; Laarmann, T.; Lee, H. J.; Ma, T.; Mithen, J. P.; Mitzner, R.; Murphy, C. D.; Nakatsutsumi, M.; Neumayer, P.; Przystawik, A.; Roling, S.; Schulz, M.; Siemer, B.; Skruszewicz, S.; Tiggesbäumker, J.; Toleikis, S.; Tschentscher, T.; White, T.; Wöstmann, M.; Zacharias, H.; Döppner, T.; Glenzer, S. H.; Redmer, R.
2014-03-01
We report on the dynamics of ultrafast heating in cryogenic hydrogen initiated by a ≲300 fs, 92 eV free electron laser x-ray burst. The rise of the x-ray scattering amplitude from a second x-ray pulse probes the transition from dense cryogenic molecular hydrogen to a nearly uncorrelated plasmalike structure, indicating an electron-ion equilibration time of ˜0.9 ps. The rise time agrees with radiation hydrodynamics simulations based on a conductivity model for partially ionized plasma that is validated by two-temperature density-functional theory.
Dense optical-electrical interface module
Paul Chang
2000-12-21
The DOIM (Dense Optical-electrical Interface Module) is a custom-designed optical data transmission module employed in the upgrade of the Silicon Vertex Detector of the CDF experiment at Fermilab. Each DOIM module consists of a transmitter (TX) converting electrical differential input signals to optical outputs, a middle segment of jacketed fiber ribbon cable, and a receiver (RX) which senses the light inputs and converts them back to electrical signals. The targeted operational frequency is 53 MHz, and higher rates are achievable. This article outlines the design goals, implementation methods, production test results, and radiation hardness tests of these modules.
Flavour Oscillations in Dense Baryonic Matter
NASA Astrophysics Data System (ADS)
Filip, Peter
2017-01-01
We suggest that fast neutral meson oscillations may occur in dense baryonic matter, which can influence the balance of s/s̄ quarks in nucleus-nucleus and proton-nucleus interactions, if the primordial multiplicities of neutral K0 mesons are sufficiently asymmetrical. The phenomenon can occur even if CP symmetry is fully conserved, and it may be responsible for the enhanced sub-threshold production of multi-strange hyperons observed in low-energy A+A and p+A interactions.
Electrical and thermal conductivities in dense plasmas
Faussurier, G.; Blancard, C.; Combis, P.; Videau, L.
2014-09-15
Expressions for the electrical and thermal conductivities in dense plasmas are derived by combining the Chester-Thellung-Kubo-Greenwood approach and the Kramers approximation. The infrared divergence is removed assuming a Drude-like behaviour. An analytical expression is obtained for the Lorenz number that interpolates between the cold solid-state and the hot plasma phases. An expression for the electrical resistivity is proposed using the Ziman-Evans formula, from which the thermal conductivity can be deduced using the analytical expression for the Lorenz number. The present method can be used to estimate the electrical and thermal conductivities of mixtures. Comparisons with experiment and quantum molecular dynamics simulations are presented.
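The link between the two conductivities runs through the Lorenz number, kappa = L * sigma * T. The sketch below uses only the textbook Sommerfeld (degenerate-limit) constant, whereas the paper derives an interpolating expression between the cold-solid and hot-plasma phases, so this is a baseline, not their model:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C
L_SOMMERFELD = (math.pi ** 2 / 3.0) * (K_B / Q_E) ** 2   # ~2.44e-8 W Ohm K^-2

def thermal_conductivity(sigma, temperature, lorenz=L_SOMMERFELD):
    """Wiedemann-Franz estimate of thermal conductivity (W/m/K) from
    electrical conductivity sigma (S/m) at a given temperature (K).
    The default Lorenz number is the degenerate-electron limit only."""
    return lorenz * sigma * temperature
```

Plugging in a metal-like sigma of about 6e7 S/m at room temperature recovers a thermal conductivity of a few hundred W/m/K, the familiar order of magnitude for good conductors.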
Gravity-driven dense granular flows
ERTAS,DENIZ; GREST,GARY S.; HALSEY,THOMAS C.; DEVINE,DOV; SILBERT,LEONARDO E.
2000-03-29
The authors report and analyze the results of numerical studies of dense granular flows in two and three dimensions, using both linear damped springs and Hertzian force laws between particles. Chute flow generically produces a constant density profile that satisfies scaling relations suggestive of a Bagnold grain inertia regime. The type of force law has little impact on the behavior of the system. Failure is not initiated at the surface, consistent with the absence of surface flows and different principal stress directions at vs. below the surface.
Laser Sheet Dropsizing of dense sprays
NASA Astrophysics Data System (ADS)
Le Gal, P.; Farrugia, N.; Greenhalgh, D. A.
1999-02-01
A new technique has been developed that produces instantaneous or time-averaged two-dimensional images of Sauter Mean Diameter from a spray. Laser Sheet Dropsizing (LSD) combines elastic and inelastic light scattered from a laser sheet. Compared with Phase Doppler Anemometry (PDA), the new technique offers advantages in increased spatial and temporal resolution and more rapid spray characterisation. Moreover, the technique can also be applied to dense sprays. Successful implementation requires careful calibration, particularly of the effect of dye concentration on the dropsize dependence of the inelastic scattered light.
Molecular dynamics simulations of dense plasmas
Collins, L.A.; Kress, J.D.; Kwon, I.; Lynch, D.L.; Troullier, N.
1993-12-31
We have performed quantum molecular dynamics simulations of hot, dense plasmas of hydrogen over a range of temperatures (0.1-5 eV) and densities (0.0625-5 g/cc). We determine the forces quantum mechanically from density functional, extended Huckel, and tight-binding techniques and move the nuclei according to the classical equations of motion. We determine pair-correlation functions, diffusion coefficients, and electrical conductivities. We find that many-body effects predominate in this regime. We begin to obtain agreement with the OCP and Thomas-Fermi models only at the higher temperatures and densities.
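One of the transport quantities listed above, the self-diffusion coefficient, is commonly extracted from such trajectories via the Einstein relation. A minimal sketch; the trajectory layout is assumed, and a production analysis would fit the slope of the mean-squared displacement over many time origins rather than use two frames:

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Self-diffusion coefficient from the 3D Einstein relation,
    D = MSD / (6 t), estimated from the end points of an unwrapped
    trajectory `positions` with shape (frames, atoms, 3); dt is the
    time between frames."""
    elapsed = dt * (positions.shape[0] - 1)
    disp = positions[-1] - positions[0]
    msd = np.mean(np.sum(disp ** 2, axis=-1))   # mean-squared displacement
    return msd / (6.0 * elapsed)
```

The electrical conductivity, by contrast, is typically obtained from a Kubo-Greenwood evaluation over the electronic states rather than from the nuclear trajectory alone.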
NASA Astrophysics Data System (ADS)
Kim, Yongok; Kong, Gyuyeol; Choi, Sooyong
2012-08-01
A 2/4 modulation code scheme with error-correcting capability is proposed for holographic data storage (HDS) systems. We adopt trellis-coded modulation (TCM) to obtain a good error-correcting capability without loss of data rate in the HDS systems. To overcome the loss of data rate caused by the 1/2-rate convolutional code, we extend the 2/4 modulation code set to a 3/4 modulation code set as a higher-order modulation in the proposed scheme. Additionally, we find an optimal mapping of index numbers to maximize the free distance on the trellis and calculate the free distance for each constraint length. The simulation results show that the proposed scheme, at the same data rate, has about 4 dB of coding gain compared to the conventional 2/4 modulation coding scheme.
Characterising the Dense Molecular Gas in Exceptional Local Galaxies
NASA Astrophysics Data System (ADS)
Tunnard, Richard C. A.
2016-08-01
The interferometric facilities now coming online (the Atacama Large Millimetre Array (ALMA) and the NOrthern Extended Millimeter Array (NOEMA)) and those planned for the coming decade (the Next Generation Very Large Array (ngVLA) and the Square Kilometre Array (SKA)) in the radio to sub-millimetre regimes are opening a window to the molecular gas in high-redshift galaxies. However, our understanding of similar galaxies in the local universe is still far from complete and the data analysis techniques and tools needed to interpret the observations in consistent and comparable ways are yet to be developed. I first describe the Monte Carlo Markov Chain (MCMC) script developed to empower a public radiative transfer code. I characterise both the public code and MCMC script, including an exploration of the effect of observing molecular lines at high redshift where the Cosmic Microwave Background (CMB) can provide a significant background, as well as the effect this can have on well-known local correlations. I present two studies of ultraluminous infrared galaxies (ULIRGs) in the local universe making use of literature and collaborator data. In the first of these, NGC6240, I use the wealth of available data and the geometry of the source to develop a multi-phase, multi-species model, finding evidence for a complex medium of hot diffuse and cold dense gas in pressure equilibrium. Next, I study the prototypical ULIRG Arp 220; an extraordinary galaxy rendered especially interesting by the controversy over the power source of the western of the two merger nuclei and its immense luminosity and dust obscuration. Using traditional grid based methods I explore the molecular gas conditions within the nuclei and find evidence for chemical differentiation between the two nuclei, potentially related to the obscured power source. Finally, I investigate the potential evolution of proto-clusters over cosmic time with sub-millimetre observations of 14 radio galaxies, unexpectedly finding
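The MCMC driver described in the thesis abstract can be caricatured by a random-walk Metropolis sampler. This is a generic 1-D sketch, not the actual script, and the log-posterior callable stands in for a call to the radiative transfer code:

```python
import math
import random

def metropolis_hastings(log_post, x0, steps=2000, scale=0.5, seed=1):
    """Minimal random-walk Metropolis sampler.

    log_post: maps a parameter value to an unnormalized log-posterior
    (in the thesis this would wrap a radiative transfer evaluation).
    Proposals are Gaussian with standard deviation `scale`."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain
```

A real analysis would sample several parameters at once (densities, temperatures, column densities) and discard a burn-in segment before summarizing the chain.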
Song, Chenchen; Wang, Lee-Ping; Martínez, Todd J
2016-01-12
We present an automated code engine (ACE) that automatically generates optimized kernels for computing integrals in electronic structure theory on a given graphical processing unit (GPU) computing platform. The code generator in ACE creates multiple code variants with different memory and floating point operation trade-offs. A graph representation is created as the foundation of the code generation, which allows the code generator to be extended to various types of integrals. The code optimizer in ACE determines the optimal code variant and GPU configurations for a given GPU computing platform by scanning over all possible code candidates and then choosing the best-performing code candidate for each kernel. We apply ACE to the optimization of effective core potential integrals and gradients. It is observed that the best code candidate varies with differing angular momentum, floating point precision, and type of GPU being used, which shows that ACE may be a powerful tool for adapting to fast-evolving GPU architectures.
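The exhaustive scan that ACE's code optimizer performs can be sketched generically: time every candidate variant of a kernel and keep the fastest. The sketch below is a minimal illustration of that autotuning pattern; the variant functions, benchmark, and names are hypothetical, not ACE's actual interface:

```python
import time

def pick_best_variant(variants, benchmark, repeats=3):
    """Return the fastest candidate, mirroring an exhaustive autotuning scan."""
    best_name, best_time = None, float("inf")
    for name, kernel in variants.items():
        t = min(benchmark(kernel) for _ in range(repeats))  # best-of-N timing
        if t < best_time:
            best_name, best_time = name, t
    return best_name, best_time

# Hypothetical "variants": the same computation with different strategies.
def variant_a(n): return sum(i * i for i in range(n))      # explicit loop
def variant_b(n): return n * (n - 1) * (2 * n - 1) // 6    # closed form

def bench(kernel):
    start = time.perf_counter()
    kernel(100_000)
    return time.perf_counter() - start

name, _ = pick_best_variant({"loop": variant_a, "closed_form": variant_b}, bench)
print(name)
```

In ACE the candidates differ in memory layout and floating-point trade-offs rather than loop strategy, but the selection logic is the same: measure on the target GPU, keep the winner per kernel.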
Massive Star Formation: Characterising Infall and Outflow in dense cores.
NASA Astrophysics Data System (ADS)
Akhter, Shaila; Cunningham, Maria; Harvey-Smith, Lisa; Jones, Paul Andrew; Purcell, Cormac; Walsh, Andrew John
2015-08-01
Massive stars are some of the most important objects in the Universe, shaping the evolution of galaxies, creating chemical elements, and hence shaping the evolution of the Universe. However, the processes by which they form, and how they shape their environment during their birth, are not well understood. We are using NH3 data from the "H2O Southern Galactic Plane Survey" (HOPS) to define the positions of dense cores/clumps of gas in the southern Galactic plane that are likely to form stars. Due to its effective critical density, NH3 traces massive star-forming regions more effectively than many other tracers. We carried out a comparative study of different clump-finding methods and found Fellwalker to be the best. We found that ~10% of the star-forming clumps have multiple velocity components along the line of sight, while ~90% have a single component. Then, using data from the "Millimetre Astronomy Legacy Team 90 GHz" (MALT90) survey, we search for the presence of infall and outflow associated with these cores. We will subsequently use the "3D Molecular Line Radiative Transfer Code" (MOLLIE) to constrain properties of the infall and outflow, such as velocity and mass flow. The aim of the project is to determine how common infall and outflow are in star-forming cores, hence providing valuable constraints on the timescales and physical processes involved in massive star formation.
Study on the Polarity Riddle of the Dense Plasma Focus
NASA Astrophysics Data System (ADS)
Jiang, Sheng; Link, Anthony; Higginson, Drew; Schmidt, Andrea
2016-10-01
The dense plasma focus (DPF) Z-pinch devices are capable of producing intense pulses of X-rays and neutrons, and thus can serve as portable sources for active interrogation. DPF devices are normally operated with the inner electrode as the anode. It has been found that interchanging the polarity of the electrodes can cause an orders-of-magnitude decrease in the neutron yield [1]. The reason for this severe decay remains unclear. Here we use the particle-in-cell (PIC) code LSP [2,3] to model a portable DPF with both polarities. The filling gas is deuterium. The simulations are run in fluid mode for the rundown phase and are switched to kinetic mode to capture the anomalous resistivity and beam acceleration process during the pinch. The differences in the shape of the sheath, the voltage and current traces, and the electric and magnetic fields in the pinch region due to different polarities all have great effects on the deuteron ion spectrum, which further determines the neutron yield. A detailed comparison will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL.
ALEGRA-HEDP simulations of the dense plasma focus.
Flicker, Dawn G.; Kueny, Christopher S.; Rose, David V.
2009-09-01
We have carried out 2D simulations of three dense plasma focus (DPF) devices using the ALEGRA-HEDP code and validated the results against experiments. The three devices included two Mather-type machines described by Bernard et al. and the Tallboy device currently in operation at NSTec in North Las Vegas. We present simulation results and compare to detailed plasma measurements for one Bernard device and to current and neutron yields for all three. We also describe a new ALEGRA capability to import data from particle-in-cell calculations of initial gas breakdown, which will allow the first ever simulations of DPF operation from the beginning of the voltage discharge to the pinch phase for arbitrary operating conditions and without assumptions about the early sheath structure. The next step in understanding DPF pinch physics must be three-dimensional modeling of conditions going into the pinch, and we have just launched our first 3D simulation of the best-diagnosed Bernard device.
Pausing and Backtracking in Transcription Under Dense Traffic Conditions
NASA Astrophysics Data System (ADS)
Klumpp, Stefan
2011-04-01
RNA polymerases transcribe the genetic information from DNA to RNA. They move along the DNA by stochastic single-nucleotide steps that are interrupted by pauses. Here we use a variant of driven lattice gas models or exclusion processes to study the effects of these pauses under conditions, where many RNA polymerases transcribe the same gene. We consider elemental pauses, where RNA polymerases are inactive and immobile, and backtracking pauses, during which RNA polymerases translocate backwards in a diffusive fashion. Under single-molecule conditions, backtracking can lead to complex dynamics due to a power-law distribution of the pause durations. Under conditions of dense RNA polymerase traffic, as in the highly transcribed genes coding for ribosomal RNA and transfer RNA, backtracking pauses are strongly suppressed because the trailing active RNA polymerase restricts the space available for backward translocation and ratchets the leading backtracked RNA polymerase forward. We characterize this effect quantitatively using extensive computer simulations. Furthermore, we show that such suppression of pauses may have a regulatory role and lead to highly cooperative control functions when coupled to transcription termination.
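The crowding mechanism described above can be illustrated with a toy exclusion-process simulation: 'active' particles hop rightward, 'paused' particles attempt a backward (backtracking) step, and that step is blocked whenever a trailing particle occupies the site behind, which is the effect that suppresses backtracking in dense traffic. The parameter values below are arbitrary illustration values, not fitted to RNA polymerase data:

```python
import random

def sweep(lattice, p_pause=0.05, p_back=0.3, p_resume=0.4):
    """One random-sequential sweep of an exclusion process with pausing.

    Sites hold None, 'active', or 'paused'. Active particles hop right into
    empty sites; paused particles either resume or try to diffuse left
    (backtrack), which exclusion forbids when the trailing site is occupied.
    """
    L = len(lattice)
    for i in random.sample(range(L), L):
        s = lattice[i]
        if s == "active":
            if random.random() < p_pause:
                lattice[i] = "paused"
            elif i + 1 < L and lattice[i + 1] is None:
                lattice[i], lattice[i + 1] = None, "active"
        elif s == "paused":
            if random.random() < p_resume:
                lattice[i] = "active"
            elif random.random() < p_back and i > 0 and lattice[i - 1] is None:
                lattice[i], lattice[i - 1] = None, "paused"  # backtrack left
    return lattice

random.seed(1)
road = ["active" if i % 3 == 0 else None for i in range(30)]
for _ in range(100):
    sweep(road)
print(road.count("active") + road.count("paused"))  # particle number conserved
```

At high density the `lattice[i - 1] is None` check rarely succeeds, so backtracking excursions are cut short, qualitatively reproducing the ratcheting effect studied in the paper.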
Ion Acoustic Modes in Warm Dense Matter
NASA Astrophysics Data System (ADS)
Hartley, Nicholas; Monaco, Guilio; White, Thomas; Gregori, Gianluca; Graham, Peter; Fletcher, Luke; Appel, Karen; Tschentscher, Thomas; Lee, Hae Ja; Nagler, Bob; Galtier, Eric; Granados, Eduardo; Heimann, Philip; Zastrau, Ulf; Doeppner, Tilo; Gericke, Dirk; Lepape, Sebastien; Ma, Tammy; Pak, Art; Schropp, Andreas; Glenzer, Siegfried; Hastings, Jerry
2015-06-01
We present results that, for the first time, show scattering from ion acoustic modes in warm dense matter, representing an unprecedented level of energy resolution in the study of dense plasmas. The experiment was carried out at the LCLS facility in California on an aluminum sample at 7 g/cc and 5 eV. Using an X-ray probe at 8 keV, shifted peaks at +/-150 meV were observed. Although the energy shifts from interactions with the acoustic waves agree with predicted values from DFT-MD models, a central (elastic) peak was also observed, which did not appear in modelled spectra and may be due to the finite timescale of the simulation. Data fitting with a hydrodynamic form has proved able to match the observed spectrum and provide measurements of some thermodynamic properties of the system, which mostly agree with predicted values. Suggestions for further experiments to determine the cause of the disparity are also given.
Solids flow rate measurement in dense slurries
Porges, K.G.; Doss, E.D.
1993-09-01
Accurate and rapid flow rate measurement of solids in dense slurries remains an unsolved technical problem, with important industrial applications in chemical processing plants and long-distance solids conveyance. In a hostile two-phase medium, such a measurement calls for two independent parameter determinations, both by non-intrusive means. Typically, dense slurries tend to flow in laminar, non-Newtonian mode, eliminating most conventional means that usually rely on calibration (which becomes more difficult and costly for high pressure and temperature media). These issues are reviewed, and specific solutions are recommended in this report. Detailed calculations that lead to improved measuring device designs are presented for both bulk density and average velocity measurements. Cross-correlation, chosen here for the latter task, has long been too inaccurate for practical applications. The cause and the cure of this deficiency are discussed using theory-supported modeling. Fluid mechanics is used to develop the velocity profiles of laminar non-Newtonian flow in a rectangular duct. This geometry uniquely allows the design of highly accurate "capacitive" devices and also lends itself to gamma transmission densitometry on an absolute basis. An absolute readout, though of less accuracy, is also available from a capacitive densitometer, and a pair of capacitive sensors yields signals suitable for cross-correlation velocity measurement.
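The cross-correlation velocity method refined in this report works by recording the same density fluctuations at two sensors a known distance apart and locating the lag of the correlation peak; distance divided by delay gives the velocity. A minimal sketch with synthetic signals (the sample rate, sensor spacing, and delay are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0        # sample rate, Hz (assumed)
spacing = 0.10     # sensor separation, m (assumed)
true_delay = 0.025 # transit time, s -> true velocity = 4 m/s

signal = rng.normal(size=4096)                 # density fluctuations
shift = int(true_delay * fs)
upstream = signal
downstream = np.roll(signal, shift) + 0.1 * rng.normal(size=signal.size)

# Peak of the cross-correlation gives the transit delay in samples.
corr = np.correlate(downstream, upstream, mode="full")
lag = corr.argmax() - (signal.size - 1)        # zero lag sits at index N-1
velocity = spacing / (lag / fs)
print(round(velocity, 2))
```

Real implementations must cope with the decorrelation and low-pass filtering the slurry imposes between the two stations, which is exactly the accuracy problem the report analyzes.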
Symmetry energy in cold dense matter
NASA Astrophysics Data System (ADS)
Jeong, Kie Sang; Lee, Su Houng
2016-01-01
We calculate the symmetry energy in cold dense matter both in the normal quark phase and in the 2-color superconductor (2SC) phase. For the normal phase, the thermodynamic potential is calculated by using hard dense loop (HDL) resummation to leading order, where the dominant contribution comes from the longitudinal gluon rest mass. The effect of gluonic interaction on the symmetry energy, obtained from the thermodynamic potential, was found to be small. In the 2SC phase, the non-perturbative BCS pairing gives an enhanced symmetry energy, as the gapped states are forced to be in the common Fermi sea, reducing the number of available quarks that can contribute to the asymmetry. We used high density effective field theory to estimate the contribution of gluon interaction to the symmetry energy. Among the gluon rest masses in the 2SC phase, only the Meissner mass has iso-spin dependence, although its magnitude is much smaller than the Debye mass. As the iso-spin dependence of gluon rest masses is even smaller than in the normal phase, we expect that the contribution of gluonic interaction to the symmetry energy in the 2SC phase will be minimal. The different value of the symmetry energy in each phase will lead to different predictions for the particle yields in heavy ion collision experiments.
Compton scattering measurements from dense plasmas
Glenzer, S. H.; Neumayer, P.; Doppner, T.; ...
2008-06-12
Here, Compton scattering techniques have been developed for accurate measurements of densities and temperatures in dense plasmas. One future challenge is the application of this technique to characterize compressed matter on the National Ignition Facility, where hydrogen and beryllium will approach extremely dense states of matter of up to 1000 g/cc. In this regime, the density, compressibility, and capsule fuel adiabat may be directly measured from the Compton scattered spectrum of a high-energy x-ray line source. Specifically, the scattered spectra directly reflect the electron velocity distribution. In non-degenerate plasmas, the width provides an accurate measure of the electron temperatures, while in partially Fermi degenerate systems that occur in laser-compressed matter it provides the Fermi energy and hence the electron density. Both of these regimes have been accessed in experiments at the Omega laser by employing isochorically heated solid-density beryllium and moderately compressed beryllium foil targets. In the latter experiment, compressions by a factor of 3 at pressures of 40 Mbar have been measured in excellent agreement with radiation hydrodynamic modeling.
Super-resolution without dense flow.
Su, Heng; Wu, Ying; Zhou, Jie
2012-04-01
Super-resolution is a widely applied technique that improves the resolution of input images by software methods. Most conventional reconstruction-based super-resolution algorithms assume accurate dense optical flow fields between the input frames, and their performance degrades rapidly when the motion estimation result is not accurate enough. However, optical flow estimation is usually difficult, particularly when complicated motion is presented in real-world videos. In this paper, we explore a new way to solve this problem by using sparse feature point correspondences between the input images. The feature point correspondences, which are obtained by matching a set of feature points, are usually precise and much more robust than dense optical flow fields. This is because the feature points represent well-selected significant locations in the image, and performing matching on the feature point set is usually very accurate. In order to utilize the sparse correspondences in conventional super-resolution, we extract an adaptive support region with a reliable local flow field from each corresponding feature point pair. The normalized prior is also proposed to increase the visual consistency of the reconstructed result. Extensive experiments on real data were carried out, and results show that the proposed algorithm produces high-resolution images with better quality, particularly in the presence of large-scale or complicated motion fields.
Dynamics of Kr in dense clathrate hydrates.
Klug, D. D.; Tse, J. S.; Zhao, J. Y.; Sturhahn, W.; Alp, E. E.; Tulk, C. A.
2011-01-01
The dynamics of Kr atoms as guests in dense clathrate hydrate structures are investigated using site specific {sup 83}Kr nuclear resonant inelastic x-ray scattering (NRIXS) spectroscopy in combination with molecular dynamics simulations. The dense structure H hydrate and filled-ice structures are studied at high pressures in a diamond anvil high-pressure cell. The dynamics of Kr in the structure H clathrate hydrate quench recovered at 77 K is also investigated. The Kr phonon density of states obtained from the experimental NRIXS data are compared with molecular dynamics simulations. The temperature and pressure dependence of the phonon spectra provide details of the Kr dynamics in the clathrate hydrate cages. Comparison with the dynamics of Kr atoms in the low-pressure structure II obtained previously was made. The Lamb-Mossbauer factor obtained from NRIXS experiments and molecular dynamics calculations are in excellent agreement and are shown to yield unique information on the strength and temperature dependence of guest-host interactions.
Quantum molecular dynamics simulations of dense matter
Collins, L.; Kress, J.; Troullier, N.; Lenosky, T.; Kwon, I.
1997-12-31
The authors have developed a quantum molecular dynamics (QMD) simulation method for investigating the properties of dense matter in a variety of environments. The technique treats a periodically-replicated reference cell containing N atoms in which the nuclei move according to the classical equations of motion. The interatomic forces are generated from the quantum mechanical interactions between the electrons and nuclei. To generate these forces, the authors employ several methods of varying sophistication, from tight-binding (TB) to elaborate density functional (DF) schemes. In the latter case, lengthy simulations on the order of 200 atoms are routinely performed, while for TB, which requires no self-consistency, upwards of 1000 atoms are systematically treated. The QMD method has been applied to a variety of cases: (1) fluid/plasma hydrogen from liquid density to 20 times volume-compressed for temperatures of a thousand to a million degrees Kelvin; (2) isotopic hydrogenic mixtures; (3) liquid metals (Li, Na, K); (4) impurities such as argon in dense hydrogen plasmas; and (5) metal/insulator transitions in rare gas systems (Ar, Kr) under high compressions. The advent of parallel versions of the methods, especially for fast eigensolvers, presages LDA simulations in the range of 500-1000 atoms and TB runs for tens of thousands of particles. This leap should allow treatment of shock chemistry as well as large-scale mixtures of species in highly transient environments.
Nuclear quantum dynamics in dense hydrogen
Kang, Dongdong; Sun, Huayang; Dai, Jiayu; Chen, Wenbo; Zhao, Zengxiu; Hou, Yong; Zeng, Jiaolong; Yuan, Jianmin
2014-01-01
Nuclear dynamics in dense hydrogen, which is determined by the key physics of large-angle scattering or many-body collisions between particles, is crucial for the dynamics of planetary evolution and hydrodynamical processes in inertial confinement fusion. Here, using improved ab initio path-integral molecular dynamics simulations, we investigated the nuclear quantum dynamics regarding transport behaviors of dense hydrogen up to temperatures of 1 eV. With the inclusion of nuclear quantum effects (NQEs), the ionic diffusions are larger than in the classical treatment by 20% to 146% as the temperature is decreased from 1 eV to 0.3 eV at 10 g/cm3; meanwhile, electrical and thermal conductivities are significantly lowered. In particular, the ionic diffusion is found to be much larger than that without NQEs even when both ionic distributions are the same at 1 eV. The significant quantum delocalization of ions introduces a remarkably different scattering cross section between protons compared with classical particle treatments, which explains the large difference in transport properties induced by NQEs. The Stokes-Einstein relation, Wiedemann-Franz law, and isotope effects are re-examined, showing different behaviors in nuclear quantum dynamics. PMID:24968754
Probing the Physical Structures of Dense Filaments
NASA Astrophysics Data System (ADS)
Li, Di
2015-08-01
Filaments are a common feature in cosmological structures of various scales, ranging from the dark matter cosmic web, galaxy clusters, and inter-galactic gas flows to Galactic ISM clouds. Even within cold dense molecular cores, filaments have been detected. Theories and simulations with (or without) different combinations of physical principles, including gravity, thermal balance, turbulence, and magnetic fields, can reproduce intriguing images of filaments. The ubiquity of filaments and the similarity of simulated ones make physical parameters, beyond dust column density, a necessity for understanding filament evolution. I report three projects attempting to measure physical parameters of filaments. We derive the volume density of a dense Taurus filament based on several cyanoacetylene transitions observed by GBT and ART. We measure the gas temperature of the OMC 2-3 filament based on combined GBT+VLA ammonia images. We also measure the sub-millimeter polarization vectors along OMC3. These filaments were found to be likely cylinder-type structures, without dynamic heating, and likely accreting mass along the magnetic field lines.
Charge exchange between two nearest neighbour ions immersed in a dense plasma
NASA Astrophysics Data System (ADS)
Sauvan, P.; Angelo, P.; Derfoul, H.; Leboucher-Dalimier, E.; Devdariani, A.; Calisti, A.; Talin, B.
1999-04-01
In dense plasmas the quasimolecular model is relevant to describe the radiative properties: two nearest neighbor ions remain close to each other during a time scale of the order of the emission time. Within the frame of a quasistatic approach it has been shown that hydrogen-like spectral line shapes can exhibit satellite-like features. In this work we present the effect on the line shapes of the dynamical collision between the two ions exchanging transiently their bound electron. This model is suitable for the description of the core, the wings and the red satellite-like features. It is post-processed to the self consistent code (IDEFIX) giving the adiabatic transition energies and the oscillator strengths for the transient molecule immersed in a dense free electron bath. It is shown that the positions of the satellites are insensitive to the dynamics of the ion-ion collision. Results for fluorine Lyβ are presented.
Efficient entropy coding for scalable video coding
NASA Astrophysics Data System (ADS)
Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo
2005-10-01
The standardization of the scalable extension of H.264 has called for additional functionality based on the H.264 standard to support combined spatio-temporal and SNR scalability. For the entropy coding of the H.264 scalable extension, the Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme has been considered so far. In this paper, we present a new context modeling scheme that uses the inter-layer correlation between syntax elements. As a result, it improves the coding efficiency of entropy coding in the H.264 scalable extension. In simulation results of applying the proposed scheme to encoding the syntax element mb_type, it is shown that the improvement in coding efficiency of the proposed method is up to 16% in terms of bit saving, due to estimation of a more adequate probability model.
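The benefit of choosing a context from a correlated (e.g. base-layer) signal can be illustrated with a toy adaptive binary model: conditioning splits the bit stream into more predictable subsets, lowering the ideal arithmetic-coding cost. This is a counts-based sketch only, not the actual CABAC state machine or the paper's mb_type contexts:

```python
import math

class AdaptiveContext:
    """Simple adaptive binary probability model (Laplace-smoothed counts)."""
    def __init__(self):
        self.c0, self.c1 = 1, 1
    def p1(self):
        return self.c1 / (self.c0 + self.c1)
    def cost_and_update(self, bit):
        p = self.p1() if bit else 1 - self.p1()
        if bit: self.c1 += 1
        else:   self.c0 += 1
        return -math.log2(p)            # ideal arithmetic-coding cost in bits

def code_cost(bits, contexts, ctx_of):
    """Total cost when bit i is coded in the context selected by ctx_of(i)."""
    return sum(contexts[ctx_of(i)].cost_and_update(b) for i, b in enumerate(bits))

# Hypothetical enhancement-layer bits that mostly follow base-layer bits.
base = [i % 2 for i in range(2000)]
enh = [b if i % 10 else 1 - b for i, b in enumerate(base)]  # every 10th flipped

single = code_cost(enh, [AdaptiveContext()], lambda i: 0)
by_base = code_cost(enh, [AdaptiveContext(), AdaptiveContext()],
                    lambda i: base[i])   # context chosen by the base layer
print(by_base < single)
```

Conditioning on the correlated base-layer bit concentrates the probability estimates, which is the same effect the proposed inter-layer context modeling exploits for mb_type.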
Denis Ridzal, Drew Kouri
2014-05-13
ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL uses only evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
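A matrix-free interface of the kind ROL exposes can be sketched as an optimizer that touches the client code only through value and gradient callbacks, never through explicit matrices. The sketch below is a generic gradient-descent illustration of that interface style, not ROL's actual API:

```python
def minimize(value, gradient, x0, step=0.1, iters=200):
    """Matrix-free gradient descent: the client supplies only callbacks.
    (The value callback would drive line searches in a real optimizer.)"""
    x = list(x0)
    for _ in range(iters):
        g = gradient(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical client "simulation": scalar response with minimum at (1, -2).
response = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
response_grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]

x = minimize(response, response_grad, [0.0, 0.0])
print([round(v, 3) for v in x])
```

The point of the matrix-free design is that the optimizer never needs the Hessian or Jacobian assembled; the simulation code stays a black box that answers "value?" and "gradient?".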
Some mathematical refinements concerning error minimization in the genetic code.
Buhrman, Harry; van der Gulik, Peter T S; Kelk, Steven M; Koolen, Wouter M; Stougie, Leen
2011-01-01
The genetic code is known to have a high level of error robustness and has been shown to be very error robust compared to randomly selected codes, but to be significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimization problem as a Quadratic Assignment Problem and use this to formally verify that the code found by the heuristic algorithm is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 × 10^18 codes to approximately 5.908 × 10^45 codes. We do this by leaving the fixed block model, and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints, three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could be the reason for this error robustness.
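The kind of error function being optimized here can be sketched as: score a codon-to-amino-acid assignment by the mean squared change of some amino acid property over all single-nucleotide substitutions. The toy below uses a four-"amino-acid" alphabet and a made-up property scale, not the real genetic code or the paper's exact error function:

```python
from itertools import product

BASES = "UCAG"
codons = ["".join(c) for c in product(BASES, repeat=3)]

def neighbors(codon):
    """All codons reachable by one single-nucleotide substitution."""
    for pos in range(3):
        for b in BASES:
            if b != codon[pos]:
                yield codon[:pos] + b + codon[pos + 1:]

def error_cost(assignment, prop):
    """Mean squared property change over all single-point mutations."""
    total, count = 0.0, 0
    for c in codons:
        for n in neighbors(c):
            total += (prop[assignment[c]] - prop[assignment[n]]) ** 2
            count += 1
    return total / count

# Toy setup: 4 "amino acids" with a made-up hydrophobicity-like property.
prop = {"U": 0.0, "C": 1.0, "A": 2.0, "G": 3.0}
block_code = {c: c[1] for c in codons}        # second base decides: robust
scrambled = {c: BASES[(BASES.index(c[0]) + BASES.index(c[2])) % 4]
             for c in codons}                 # assignment spread over positions
print(error_cost(block_code, prop) < error_cost(scrambled, prop))
```

A block-structured assignment absorbs two of the three mutation positions at zero cost, which is the intuition behind the fixed block model; the paper's contribution is searching far larger, wobble-constrained code spaces with this kind of objective.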
Irregular Repeat-Accumulate Codes for Volume Holographic Memory Systems
NASA Astrophysics Data System (ADS)
Pishro-Nik, Hossein; Fekri, Faramarz
2004-09-01
We investigate the application of irregular repeat-accumulate (IRA) codes in volume holographic memory (VHM) systems. We introduce methodologies to design efficient IRA codes. We show that a judiciously designed IRA code for a typical VHM can be as good as the optimized irregular low-density-parity-check codes while having the additional advantage of lower encoding complexity. Moreover, we present a method to reduce the error-floor effect of the IRA codes in the VHM systems. This method explores the structure of the noise pattern in holographic memories. Finally, we explain why IRA codes are good candidates for the VHM systems.
On a Mathematical Theory of Coded Exposure
2014-08-01
Keywords: coded exposure, computational photography, flutter shutter, motion blur, mean square error (MSE), signal-to-noise ratio (SNR). 1 Introduction Since the...photon emission µ doubles then the SNR is multiplied by a factor √2. (And we retrieve the fundamental theorem of photography.) Note that if we have no...deduce that the SNR evolves proportionally to √µ and we retrieve the fundamental theorem of photography. We now turn to the optimization of the coded
Dense Plasma Heating and Radiation Generation.
1979-01-02
"Computer Investigations of Laser-Plasma Interactions", to be submitted to the 1979 IEEE International Conference on Plasma Science, Montreal, Canada. Keywords: carbon dioxide laser, beat heating, computer code, laser-plasma interactions. A pulsed power
NASA Astrophysics Data System (ADS)
Bari, Md. S.; Das, T.
2013-09-01
Tectonic framework of Bangladesh and adjoining areas indicates that Bangladesh lies well within an active seismic zone. The after-effects of an earthquake are more severe in an underdeveloped, densely populated country like ours than in developed countries. The Bangladesh National Building Code (BNBC) was first established in 1993 to provide guidelines for the design and construction of new structures subject to earthquake ground motions, in order to minimize the risk to life for all structures. A revision of BNBC 1993 is under way to bring it up to date with other international building codes. This paper aims at the comparison of various provisions of seismic analysis as given in the building codes of different countries. This comparison will give an idea of where our country stands when it comes to safety against earthquakes. Primarily, various seismic parameters in BNBC 2010 (draft) have been studied and compared with those of BNBC 1993. Later, both the 1993 and 2010 editions of BNBC have been compared graphically with building codes of other countries, such as the National Building Code of India 2005 (NBC-India 2005) and American Society of Civil Engineers 7-05 (ASCE 7-05). The base shear/weight ratios have been plotted against the height of the building. The investigation in this paper reveals that BNBC 1993 has the least base shear among all the codes. Factored base shear values of BNBC 2010 are found to have increased significantly over those of BNBC 1993 for low-rise buildings (≤20 m) around the country. Despite the revision, BNBC 2010 (draft) still suggests lower base shear values than the Indian and American codes. Therefore, the increase in the factor of safety against earthquakes that the proposed BNBC 2010 code imposes by suggesting higher base shear values is appreciable.
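A base shear/weight comparison of the kind plotted in the paper can be sketched with a generic equivalent-static-force formula V = (Z·I·C/R)·W, where the coefficient C falls off with the building's fundamental period. The coefficients below are illustrative placeholders, not the actual values from BNBC, NBC-India, or ASCE 7-05:

```python
def base_shear_ratio(height_m, zone_coeff, importance=1.0, response_mod=5.0):
    """V/W for a generic equivalent-static procedure, V = (Z*I*C/R)*W.
    Coefficients are illustrative, not taken from any actual building code."""
    period = 0.075 * height_m ** 0.75      # common empirical T for RC frames
    c = min(2.75, 1.25 / period ** (2.0 / 3.0))
    return zone_coeff * importance * c / response_mod

# Hypothetical "old" vs "revised" zone coefficients for the same site:
for h in (10, 20, 40):
    old, new = base_shear_ratio(h, 0.15), base_shear_ratio(h, 0.28)
    print(h, round(old, 4), round(new, 4))
```

Plotting such ratios against height reproduces the qualitative shape of the paper's comparison: a higher zone coefficient raises V/W at every height, while the period term makes the ratio fall off for taller buildings.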
2011-01-01
proteins are likely associated with the target phenotype. The DENSE code can be downloaded from http://www.freescience.org/cs/DENSE/ PMID:22024446
Implementation and Refinement of a Comprehensive Model for Dense Granular Flows
Sundaresan, Sankaran
2015-09-30
Dense granular flows are ubiquitous in both natural and industrial processes. They manifest three different flow regimes, each exhibiting its own dependence on solids volume fraction, shear rate, and particle-level properties. This research project sought to develop continuum rheological models for dense granular flows that bridge multiple regimes of flow, implement them in open-source platforms for gas-particle flows and perform test simulations. The first phase of the research covered in this project involved implementation of a steady-shear rheological model that bridges quasi-static, intermediate and inertial regimes of flow into MFIX (Multiphase Flow with Interphase eXchanges, a general purpose computer code developed at the National Energy Technology Laboratory). MFIX simulations of dense granular flows in an hourglass-shaped hopper were then performed as test examples. The second phase focused on formulation of a modified kinetic theory for frictional particles that can be used over a wider range of particle volume fractions and also applies to dynamic, multi-dimensional flow conditions. To guide this work, simulations of simple shear flows of identical mono-disperse spheres were also performed using the discrete element method. The third phase of this project sought to develop and implement a more rigorous treatment of boundary effects. Towards this end, simulations of simple shear flows of identical mono-disperse spheres confined between parallel plates were performed and analyzed to formulate compact wall boundary conditions that can be used for dense frictional flows at flat frictional boundaries. The fourth phase explored the role of modest levels of cohesive interactions between particles on the dense-phase rheology. The final phase of this project focused on implementation and testing of the modified kinetic theory in MFIX and running bin-discharge simulations as test examples.
ERIC Educational Resources Information Center
McCabe, Donald; Trevino, Linda Klebe
2002-01-01
Explores the rise in student cheating and evidence that students cheat less often at schools with an honor code. Discusses effective use of such codes and creation of a peer culture that condemns dishonesty. (EV)
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-02-20
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-01-01
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
Coded continuous wave meteor radar
NASA Astrophysics Data System (ADS)
Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter
2016-03-01
The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
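Pulse compression with a pseudorandom code, the core of the scheme described above, can be sketched in a few lines: correlating the received signal against the known transmit code lifts a weak echo far above the per-sample noise floor. The code length, delay, and noise level below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(42)
code = rng.choice([-1.0, 1.0], size=4096)     # pseudorandom binary phase code

true_range_bin = 137
echo = 0.05 * np.roll(code, true_range_bin)   # weak echo delayed by 137 samples
rx = echo + 0.5 * rng.normal(size=code.size)  # SNR well below 1 per sample

# Pulse compression: correlate the received signal against shifted codes.
compressed = np.array([np.dot(rx, np.roll(code, k)) for k in range(code.size)])
print(int(compressed.argmax()))
```

The compression gain grows with code length, which is why continuous transmission (maximizing the integrated code length) lets the radar run at low peak power; independent pseudorandom codes likewise keep different transmitters nearly orthogonal under the same correlation.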
Small Satellites Embedded in Dense Planetary Rings
NASA Astrophysics Data System (ADS)
Hahn, J. M.
2005-08-01
A small satellite that inhabits a narrow gap in a dense planetary ring, such as Pan, will excite wakes at the gap edges, as well as spiral waves deeper in the ring. As the satellite disturbs the ring, it also draws angular momentum from the ring matter that orbits just interior to the satellite, while depositing that angular momentum among the ring particles that orbit just exterior. This outward transport of angular momentum causes the orbits of the nearby ring particles to slowly shrink, dragging the satellite in its gap along with them. This inward motion is, of course, type II migration, familiar from planet-formation theory. The significance of type II migration, if any, will also be assessed for the small satellites that orbit within Saturn's rings.
Nonlinear extraordinary wave in dense plasma
Krasovitskiy, V. B.; Turikov, V. A.
2013-10-15
Conditions for the propagation of a slow extraordinary wave in dense magnetized plasma are found. A solution to the set of relativistic hydrodynamic equations and Maxwell’s equations under the plasma resonance conditions, when the phase velocity of the nonlinear wave is equal to the speed of light, is obtained. The deviation of the wave frequency from the resonance frequency is accompanied by nonlinear longitudinal-transverse oscillations. It is shown that, in this case, the solution to the set of self-consistent equations obtained by averaging the initial equations over the period of high-frequency oscillations has the form of an envelope soliton. The possibility of excitation of a nonlinear wave in plasma by an external electromagnetic pulse is confirmed by numerical simulations.
Dynamic structure of dense krypton gas
NASA Astrophysics Data System (ADS)
Egelstaff, P. A.; Salacuse, J. J.; Schommers, W.; Ram, J.
1984-07-01
We have made molecular-dynamics computer simulations of dense krypton gas (10.6×10²⁷ atoms/m³ and 296 K) using reasonably realistic pair potentials. Comparisons are made with the recent experimental data
Dense annular flows of granular media
NASA Astrophysics Data System (ADS)
de Ryck, Alain; Louisnard, Olivier
2013-06-01
Dense granular flows constitute an important topic for geophysics and process engineering. To describe them, a rheology based on the coaxiality between the stress and strain tensors with a Mohr-Coulomb yield criterion has been proposed. We propose here an analytic study of flows in an annular cell, with this rheology. This geometry is relevant for a series of powder rheometers or mixing devices, but the discussion is focused on the split-bottom geometry, for which the internal flow has been investigated by the NMR technique. In this case, the full resolution of the velocity and stress fields allows us to localize the shear deformations. The theoretical results obtained for the latter are compared with the torque measurements of Dijksman et al. [Phys. Rev. E, 82 (2010) 060301].
The Theory of Dense Core Collapse
NASA Astrophysics Data System (ADS)
Li, Zhi-Yun
2014-07-01
I will review the theory of dense core collapse, with an emphasis on disk formation. Disk formation, once thought to be a simple consequence of the conservation of angular momentum during hydrodynamic core collapse, is far more subtle in magnetized gas. In this case, rotation can be strongly magnetically braked. Indeed, both analytic arguments and numerical simulations have shown that disk formation is suppressed in ideal MHD at the observed level of core magnetization. I will discuss the physical reason for this so-called “magnetic braking catastrophe,” and review possible resolutions to the problem that have been proposed so far, including non-ideal MHD effects, misalignment between the magnetic field and rotation axis, and turbulence. Other aspects of core collapse, such as fragmentation and outflow generation, will also be discussed.
Carbon nitride frameworks and dense crystalline polymorphs
NASA Astrophysics Data System (ADS)
Pickard, Chris J.; Salamat, Ashkan; Bojdys, Michael J.; Needs, Richard J.; McMillan, Paul F.
2016-09-01
We used ab initio random structure searching (AIRSS) to investigate polymorphism in C3N4 carbon nitride as a function of pressure. Our calculations reveal new framework structures, including a particularly stable chiral polymorph of space group P4₃2₁2 containing mixed sp² and sp³ bonding, that we have produced experimentally and recovered to ambient conditions. As pressure is increased, a sequence of structures with fully sp³-bonded C atoms and threefold-coordinated N atoms is predicted, culminating in a dense Pnma phase above 250 GPa. Beyond 650 GPa we find that C3N4 becomes unstable to decomposition into diamond and pyrite-structured CN2.
Binary Black Holes from Dense Star Clusters
NASA Astrophysics Data System (ADS)
Rodriguez, Carl
2017-01-01
The recent detections of gravitational waves from merging binary black holes have the potential to revolutionize our understanding of compact object astrophysics. But to fully utilize this new window into the universe, we must compare these observations to detailed models of binary black hole formation throughout cosmic time. In this talk, I will review our current understanding of cluster dynamics, describing how binary black holes can be formed through gravitational interactions in dense stellar environments, such as globular clusters and galactic nuclei. I will review the properties and merger rates of binary black holes from the dynamical formation channel. Finally, I will describe how the spins of a binary black hole are determined by its formation history, and how we can use this to discriminate between dynamically-formed binaries and those formed from isolated evolution in galactic fields.
Kaon condensation in dense stellar matter
Lee, Chang-Hwan; Rho, M. |
1995-03-01
This article combines two talks given by the authors and is based on work done in collaboration with G.E. Brown and D.P. Min on kaon condensation in dense baryonic medium, treated in chiral perturbation theory using the heavy-baryon formalism. It contains, in addition to what was recently published, astrophysical background for kaon condensation discussed by Brown and Bethe, a discussion of a renormalization-group analysis of meson condensation worked out together with H.K. Lee and S.J. Sin, and the recent results of K.M. Westerberg in the bound-state approach to the Skyrme model. Negatively charged kaons are predicted to condense at a critical density 2 ≲ ρ/ρ₀ ≲ 4, in the range that allows the intriguing new phenomena predicted by Brown and Bethe to take place in compact-star matter.
Performance Evaluation of Dense Gas Dispersion Models.
NASA Astrophysics Data System (ADS)
Touma, Jawad S.; Cox, William M.; Thistle, Harold; Zapert, James G.
1995-03-01
This paper summarizes the results of a study to evaluate the performance of seven dense gas dispersion models using data from three field experiments. Two models (DEGADIS and SLAB) are in the public domain and the other five (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE) are proprietary. The field data used are the Desert Tortoise pressurized ammonia releases, Burro liquefied natural gas spill tests, and the Goldfish anhydrous hydrofluoric acid spill experiments. Desert Tortoise and Goldfish releases were simulated as horizontal jet releases, and Burro as a liquid pool. Performance statistics were used to compare maximum observed concentrations and plume half-width to those predicted by each model. Model performance varied and no model exhibited consistently good performance across all three databases. However, when combined across the three databases, all models performed within a factor of 2. Problems encountered are discussed in order to help future investigators.
Plasmon resonance in warm dense matter.
Thiele, R; Bornath, T; Fortmann, C; Höll, A; Redmer, R; Reinholz, H; Röpke, G; Wierling, A; Glenzer, S H; Gregori, G
2008-08-01
Collective Thomson scattering with extreme ultraviolet light or x rays is shown to allow for a robust measurement of the free electron density in dense plasmas. Collective excitations like plasmons appear as maxima in the scattering signal. Their frequency position can directly be related to the free electron density. The range of applicability of the standard Gross-Bohm dispersion relation and of an improved dispersion relation in comparison to calculations based on the dielectric function in random phase approximation is investigated. More important, this well-established treatment of Thomson scattering on free electrons is generalized in the Born-Mermin approximation by including collisions. We show that, in the transition region from collective to noncollective scattering, the consideration of collisions is important.
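The density diagnostic described above rests on inverting a plasmon dispersion relation. A minimal sketch, assuming SI units and the standard Bohm-Gross form ω² = ω_pe² + 3k²v_th² (the paper's improved dispersion relation and Born-Mermin collisional treatment are not reproduced here); the values of n_e, k, and T_e are illustrative, not from the paper:

```python
import numpy as np

# CODATA constants (SI)
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
k_B = 1.380649e-23       # Boltzmann constant, J/K

def electron_density_from_plasmon(omega, k, T_e):
    """Invert the Bohm-Gross relation omega^2 = omega_pe^2 + 3 k^2 v_th^2,
    with v_th^2 = k_B T_e / m_e, to get n_e from the measured plasmon
    frequency shift omega at scattering wavenumber k."""
    v_th2 = k_B * T_e / m_e
    omega_pe2 = omega**2 - 3.0 * k**2 * v_th2
    return eps0 * m_e * omega_pe2 / e**2

# Round-trip check with illustrative (assumed) dense-plasma values.
n_e = 1.0e27   # free electron density, 1/m^3
k = 5.0e8      # scattering wavenumber, 1/m (collective regime)
T_e = 1.2e5    # electron temperature, K (~10 eV)
omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))
omega = np.sqrt(omega_pe**2 + 3.0 * k**2 * k_B * T_e / m_e)
print(electron_density_from_plasmon(omega, k, T_e) / n_e)  # 1.0
```

The inversion is only meaningful in the collective regime (k times the Debye length well below 1), which is the transition region the abstract discusses.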
Laser plasma diagnostics of dense plasmas
Glendinning, S.G.; Amendt, P.; Budil, K.S.; Hammel, B.A.; Kalantar, D.H.; Key, M.H.; Landen, O.L.; Remington, B.A.; Desenne, D.E.
1995-07-12
The authors describe several experiments on Nova that use laser-produced plasmas to generate x-rays capable of backlighting dense, cold plasmas (ρ ≈ 1-3 g/cm³, kT ≈ 5-10 eV, and areal density ρℓ ≈ 0.01-0.05 g/cm²). The x-rays used vary over a wide range of hν, from 80 eV (x-ray laser) to 9 keV. This allows probing of plasmas relevant to many hydrodynamic experiments. Typical diagnostics are 100 ps pinhole framing cameras for a long-pulse backlighter and a time-integrated CCD camera for a short-pulse backlighter.
Towards a theoretical description of dense QCD
NASA Astrophysics Data System (ADS)
Philipsen, Owe
2017-03-01
The properties of matter at finite baryon densities play an important role for the astrophysics of compact stars as well as for heavy ion collisions or the description of nuclear matter. Because of the sign problem of the quark determinant, lattice QCD cannot be simulated by standard Monte Carlo at finite baryon densities. I review alternative attempts to treat dense QCD with an effective lattice theory derived by analytic strong coupling and hopping expansions, which close to the continuum is valid for heavy quarks only, but shows all qualitative features of nuclear physics emerging from QCD. In particular, the nuclear liquid gas transition and an equation of state for baryons can be calculated directly from QCD. A second effective theory based on strong coupling methods permits studies of the phase diagram in the chiral limit on coarse lattices.
Possible test of ancient dense Martian atmosphere
NASA Technical Reports Server (NTRS)
Hartmann, W. K.; Engel, S.
1993-01-01
We have completed preliminary calculations of the minimum sizes of bolides that would penetrate various hypothetical Martian atmospheres with surface pressures ranging from 6 to 1000 mbar for projectiles of various strengths. The calculations are based on a computer program. These numbers are used to estimate the diameter corresponding to the turndown in the crater diameter distribution due to the loss of these bodies, analogous to the dramatic turndown at larger sizes already discovered on Venus due to this effect. We conclude that for an atmosphere of more than a few hundred millibars, a unique downward displacement would develop in the crater diameter distribution at D ≈ 0.5-4 km, due to loss of all but Fe bolides. Careful search for this displacement globally, as outlined here, would allow us to place upper limits on the pressure of the atmosphere contemporaneous with the oldest surfaces, and possibly to get direct confirmation of dense ancient atmospheres.
Nonplanar electrostatic shock waves in dense plasmas
Masood, W.; Rizvi, H.
2010-02-15
Two-dimensional quantum ion acoustic shock waves (QIASWs) are studied in an unmagnetized plasma consisting of electrons and ions. In this regard, a nonplanar quantum Kadomtsev-Petviashvili-Burgers (QKPB) equation is derived using the small amplitude perturbation expansion method. Using the tangent hyperbolic method, an analytical solution of the planar QKPB equation is obtained and subsequently used as the initial profile to numerically solve the nonplanar QKPB equation. It is observed that the increasing number density (and correspondingly the quantum Bohm potential) and kinematic viscosity affect the propagation characteristics of the QIASW. The temporal evolution of the nonplanar QIASW is investigated both in Cartesian and polar planes and the results are discussed from the numerical stand point. The results of the present study may be applicable in the study of propagation of small amplitude localized electrostatic shock structures in dense astrophysical environments.
Yielding behavior of dense microgel glasses
NASA Astrophysics Data System (ADS)
Joshi, R. G.; Tata, B. V. R.; Karthickeyan, D.
2013-02-01
We report here the yielding behavior of dense suspensions of stimuli-responsive poly(N-isopropylacrylamide) (PNIPAM) microgel particles, studied by performing oscillatory shear measurements. At a volume fraction of φ = 0.6 (labeled sample S1), the suspension is characterized as a repulsive glass by the dynamic light scattering technique and shows one-step yielding. Quite interestingly, a higher-volume-fraction sample (S2), prepared by osmotically compressing sample S1, shows yielding occurring in two steps. A similar change from one-step to two-step yielding was reported by Pham et al. [Europhys. Lett., 75, 624 (2006)] for a hard-sphere repulsive colloidal glass transformed into an attractive glass by inducing depletion attraction. We confirm by static light scattering measurements that the repulsive interparticle interaction between PNIPAM microgel particles turns attractive upon osmotic compression.
Prediction of viscosity of dense fluid mixtures
NASA Astrophysics Data System (ADS)
Royal, Damian D.; Vesovic, Velisa; Trusler, J. P. Martin; Wakeham, William A.
The Vesovic-Wakeham (VW) method of predicting the viscosity of dense fluid mixtures has been improved by implementing new mixing rules based on the rigid sphere formalism. The proposed mixing rules are based on both Lebowitz's solution of the Percus-Yevick equation and on the Carnahan-Starling equation. The predictions of the modified VW method have been compared with experimental viscosity data for a number of diverse fluid mixtures: natural gas, hexane + heptane, hexane + octane, cyclopentane + toluene, and a ternary mixture of hydrofluorocarbons (R32 + R125 + R134a). The results indicate that the proposed improvements make possible the extension of the original VW method to liquid mixtures and to mixtures containing polar species, while retaining its original accuracy.
Oxygen ion-conducting dense ceramic
Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou
1996-01-01
Preparation, structure, and properties of mixed metal oxide compositions containing at least strontium, cobalt, iron and oxygen are described. The crystalline mixed metal oxide compositions of this invention have, for example, a structure represented by Sr_α(Fe_{1-x}Co_x)_{α+β}O_δ, where x is a number in a range from 0.01 to about 1, α is a number in a range from about 1 to about 4, β is a number in a range upward from 0 to about 20, and δ is a number which renders the compound charge neutral, and wherein the composition has a non-perovskite structure. Use of the mixed metal oxides in dense ceramic membranes which exhibit oxygen ionic conductivity and selective oxygen separation is described, as well as their use in separation of oxygen from an oxygen-containing gaseous mixture.
Oxygen ion-conducting dense ceramic
Balachandran, Uthamalingam; Kleefisch, Mark S.; Kobylinski, Thaddeus P.; Morissette, Sherry L.; Pei, Shiyou
1997-01-01
Preparation, structure, and properties of mixed metal oxide compositions containing at least strontium, cobalt, iron and oxygen are described. The crystalline mixed metal oxide compositions of this invention have, for example, a structure represented by Sr_α(Fe_{1-x}Co_x)_{α+β}O_δ, where x is a number in a range from 0.01 to about 1, α is a number in a range from about 1 to about 4, β is a number in a range upward from 0 to about 20, and δ is a number which renders the compound charge neutral, and wherein the composition has a non-perovskite structure. Use of the mixed metal oxides in dense ceramic membranes which exhibit oxygen ionic conductivity and selective oxygen separation is described, as well as their use in separation of oxygen from an oxygen-containing gaseous mixture.
Constitutive relations for steady, dense granular flows
NASA Astrophysics Data System (ADS)
Vescovi, D.; Berzi, D.; di Prisco, C. G.
2011-12-01
In the recent past, the flow of dense granular materials has been the subject of many scientific works; this is due to the large number of natural phenomena involving solid particles flowing at high concentration (e.g., debris flows and landslides). In contrast with the flow of dilute granular media, where the energy is essentially dissipated in binary collisions, the flow of dense granular materials is characterized by multiple, long-lasting and frictional contacts among the particles. The work focuses on the mechanical response of dry granular materials under steady, simple shear conditions. In particular, the goal is to obtain a complete rheology able to describe the material behavior within the entire range of concentrations for which the flow can be considered dense. The total stress is assumed to be the linear sum of a frictional and a kinetic component. The frictional and the kinetic contribution are modeled in the context of the critical state theory [8, 10] and the kinetic theory of dense granular gases [1, 3, 7], respectively. In the critical state theory, the granular material approaches a certain attractor state, independent of the initial arrangement, characterized by the capability of developing unlimited shear strains without any change in the concentration. Given that a disordered granular packing exists only for a range of concentrations between the random loose and close packing [11], a form for the concentration dependence of the frictional normal stress that makes the latter vanish at the random loose packing is defined. In the kinetic theory, the particles are assumed to interact through instantaneous, binary and uncorrelated collisions. A new state variable of the problem is introduced, the granular temperature, which accounts for the velocity fluctuations. The model has been extended to account for the decrease in the energy dissipation due to the existence of correlated motion among the particles [5, 6] and to deal with non
Megajoule Dense Plasma Focus Solid Target Experiments
NASA Astrophysics Data System (ADS)
Podpaly, Y. A.; Falabella, S.; Link, A.; Povilus, A.; Higginson, D. P.; Shaw, B. H.; Cooper, C. M.; Chapman, S.; Bennett, N.; Sipe, N.; Olson, R.; Schmidt, A. E.
2016-10-01
Dense plasma focus (DPF) devices are plasma sources that can produce significant neutron yields from beam-into-gas interactions. Yield increases, up to approximately a factor of five, have been observed previously on DPFs using solid targets, such as CD2 and D2O ice. In this work, we report on deuterium solid-target experiments at the Gemini DPF. A rotatable target holder and baffle arrangement were installed in the Gemini device which allowed four targets to be deployed sequentially without breaking vacuum. Solid targets of titanium deuteride were installed and systematically studied at a variety of fill pressures, bias voltages, and target positions. Target holder design, experimental results, and comparison to simulations will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344.
Improved models of dense anharmonic lattices
NASA Astrophysics Data System (ADS)
Rosenau, P.; Zilburg, A.
2017-01-01
We present two improved quasi-continuous models of dense, strictly anharmonic chains. The direct expansion, which includes the leading effect due to lattice dispersion, results in a Boussinesq-type PDE with a compacton as its basic solitary mode. Without increasing its complexity, we improve the model by including additional terms in the expanded interparticle potential, with the resulting compacton having a milder singularity at its edges. Particular care is taken with the Hertz potential due to its non-analyticity. Since, however, the PDEs of both the basic and the improved model are ill-posed, they are unsuitable for a study of chain dynamics. Using the bond length as a state variable, we manipulate its dispersion and derive a well-posed fourth-order PDE.
Ion beam driven warm dense matter experiments
NASA Astrophysics Data System (ADS)
Bieniosek, F. M.; Ni, P. A.; Leitner, M.; Roy, P. K.; More, R.; Barnard, J. J.; Kireeff Covo, M.; Molvik, A. W.; Yoneda, H.
2007-11-01
We report plans and experimental results in ion beam-driven warm dense matter (WDM) experiments. Initial experiments at LBNL are at 0.3-1 MeV K+ beam (below the Bragg peak), increasing toward the Bragg peak in future versions of the accelerator. The WDM conditions are envisioned to be achieved by combined longitudinal and transverse neutralized drift compression to provide a hot spot on the target with a beam spot size of about 1 mm, and pulse length about 1-2 ns. The range of the beams in solid matter targets is about 1 micron, which can be lengthened by using porous targets at reduced density. Initial experiments include an experiment to study transient darkening at LBNL; and a porous target experiment at GSI heated by intense heavy-ion beams from the SIS 18 storage ring. Further experiments will explore target temperature and other properties such as electrical conductivity to investigate phase transitions and the critical point.
X-ray scattering from dense plasmas
NASA Astrophysics Data System (ADS)
McSherry, Declan Joseph
Dense plasmas were studied by probing them with kilovolt x-rays and measuring those scattered at various angles. The laser-produced x-ray source emitted Ti He-α x-rays at 4.75 keV. Two different plasma types were explored. The first was created by laser-driven shocks on either side of a sample foil consisting of a 2 micron thickness of Al sandwiched between two 1 micron CH layers. We have observed a peak in the x-ray scattering cross section, indicating diffraction from the plasma. However, the experimentally inferred plasma density did not always agree with the hydrodynamic simulation MEDX (a modified version of MEDUSA). The second plasma type that we studied was created by soft x-ray heating on either side of a sample foil, this time consisting of a 1 micron thickness of Al sandwiched between two 0.2 micron CH layers. Two foil targets, each consisting of a 0.1 micron thick Au foil mounted on 1 micron of CH, were placed 4 mm from the sample foil. The soft x-rays were produced by laser irradiating these two foil targets. We found that, 0.5 ns after the peak of the laser heating pulses, the measured cross sections more closely matched those simulated using the Thomas-Fermi model than the Inferno model. Later in time, at 2 ns, the plasma is approaching a weakly coupled state. This is the first time x-ray scattering cross sections have been measured from dense plasmas generated by radiatively heating both sides of the sample. Moreover, these are absolute values, typically within a factor of two of expectation for early x-ray probe times.
A constitutive law for dense granular flows.
Jop, Pierre; Forterre, Yoël; Pouliquen, Olivier
2006-06-08
A continuum description of granular flows would be of considerable help in predicting natural geophysical hazards or in designing industrial processes. However, the constitutive equations for dry granular flows, which govern how the material moves under shear, are still a matter of debate. One difficulty is that grains can behave like a solid (in a sand pile), a liquid (when poured from a silo) or a gas (when strongly agitated). For the two extreme regimes, constitutive equations have been proposed based on kinetic theory for collisional rapid flows, and soil mechanics for slow plastic flows. However, the intermediate dense regime, where the granular material flows like a liquid, still lacks a unified view and has motivated many studies over the past decade. The main characteristics of granular liquids are: a yield criterion (a critical shear stress below which flow is not possible) and a complex dependence on shear rate when flowing. In this sense, granular matter shares similarities with classical visco-plastic fluids such as Bingham fluids. Here we propose a new constitutive relation for dense granular flows, inspired by this analogy and recent numerical and experimental work. We then test our three-dimensional (3D) model through experiments on granular flows on a pile between rough sidewalls, in which a complex 3D flow pattern develops. We show that, without any fitting parameter, the model gives quantitative predictions for the flow shape and velocity profiles. Our results support the idea that a simple visco-plastic approach can quantitatively capture granular flow properties, and could serve as a basic tool for modelling more complex flows in geophysical or industrial applications.
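The yield criterion and shear-rate dependence described above are commonly written as a friction law μ(I) relating shear stress to pressure through the inertial number I. A minimal sketch of such a visco-plastic law follows; the functional form is the widely used μ(I) = μ_s + (μ₂ − μ_s)/(I₀/I + 1), and the parameter values and grain properties below are illustrative, not the fitted values from the paper:

```python
import numpy as np

# Illustrative (assumed) friction-law parameters, roughly of the order
# reported for glass beads: static and limiting friction coefficients
# and the crossover inertial number.
mu_s, mu_2, I_0 = 0.38, 0.64, 0.28

def mu(I):
    """Effective friction coefficient as a function of inertial number I."""
    return mu_s + (mu_2 - mu_s) / (I_0 / I + 1.0)

def shear_stress(gamma_dot, P, d=5e-4, rho_s=2500.0):
    """Shear stress tau = mu(I) * P for shear rate gamma_dot (1/s) under
    confining pressure P (Pa); d is grain diameter (m), rho_s grain
    density (kg/m^3), both assumed values."""
    I = gamma_dot * d / np.sqrt(P / rho_s)
    return mu(I) * P

# Yield criterion: as gamma_dot -> 0, tau -> mu_s * P, i.e. a critical
# shear stress below which no flow is possible.
print(shear_stress(1e-6, 1000.0) / 1000.0)  # approaches mu_s = 0.38
```

The Bingham-like character mentioned in the abstract shows up directly: stress does not vanish with shear rate but saturates at the yield value μ_s·P, then grows with I toward μ₂·P.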
Jones, T.
1993-11-01
This paper examines the results of previous wire code research to determine the relationship between wire codes, electromagnetic fields, and childhood cancer. The paper suggests that, in the original Savitz study, biases toward producing a false positive association between high wire codes and childhood cancer were created by the selection procedure.
Universal Noiseless Coding Subroutines
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A. P.; Rice, R. F.
1986-01-01
Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. Purpose of this type of coding is to achieve data compression in the sense that coded data represent original data perfectly (noiselessly) while taking fewer bits to do so. Routines are universal because they apply to virtually any "real-world" data source.
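As a hedged illustration of the kind of coding these subroutines perform: the Rice universal coder adaptively selects among code options, but its building block is the fixed-parameter Golomb-Rice code sketched below in Python (this is not the FORTRAN package itself, only the underlying idea that small integers take fewer bits while the round trip stays lossless):

```python
def rice_encode(values, k):
    """Golomb-Rice code: each nonnegative integer n is split into a
    quotient q = n >> k (sent in unary: q ones then a zero) and a
    k-bit binary remainder."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits += [1] * q + [0]                               # unary quotient
        bits += [(r >> i) & 1 for i in reversed(range(k))]  # remainder
    return bits

def rice_decode(bits, k, count):
    """Invert rice_encode for `count` integers."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:          # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                       # skip the terminating 0
        r = 0
        for _ in range(k):           # read the k-bit remainder
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

data = [0, 3, 7, 12, 2, 5]
coded = rice_encode(data, k=2)
assert rice_decode(coded, k=2, count=len(data)) == data  # noiseless
```

Compression is achieved whenever the data source favors small values, since an integer n costs roughly n/2^k + 1 + k bits; the full Rice scheme picks k (and other options) adaptively per block.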
Mapping Local Codes to Read Codes.
Bonney, Wilfred; Galloway, James; Hall, Christopher; Ghattas, Mikhail; Tramma, Leandro; Nind, Thomas; Donnelly, Louise; Jefferson, Emily; Doney, Alexander
2017-01-01
Background & Objectives: Legacy laboratory test codes make it difficult to use clinical datasets for meaningful translational research, where populations are followed for disease risk and outcomes over many years. The Health Informatics Centre (HIC) at the University of Dundee hosts continuous biochemistry data from the clinical laboratories in Tayside and Fife dating back as far as 1987. However, the HIC-managed biochemistry dataset is coupled with incoherent sample types and unstandardised legacy local test codes, which increases the complexity of using the dataset for reasonable population health outcomes. The objective of this study was to map the legacy local test codes to the Scottish 5-byte Version 2 Read Codes using biochemistry data extracted from the repository of the Scottish Care Information (SCI) Store.
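A mapping exercise like the one described reduces, at its core, to a curated lookup from (sample type, legacy code) pairs to standardised codes, with unmapped pairs flagged for manual review. A minimal sketch follows; the sample types, local codes, and 5-byte Read codes are invented placeholders, not the actual HIC/Tayside mappings:

```python
# Curated lookup table: (sample type, legacy local code) -> Read code.
# All entries below are hypothetical placeholders for illustration.
LOCAL_TO_READ = {
    ("SERUM", "NA"):   "44I5.",  # hypothetical: serum sodium
    ("SERUM", "K"):    "44I4.",  # hypothetical: serum potassium
    ("BLOOD", "HBA1"): "42W5.",  # hypothetical: HbA1c
}

def map_code(sample_type, local_code):
    """Return the 5-byte Read code for a legacy local test code, or
    None when the pair is unmapped and needs manual curation.
    Inputs are normalised to upper case, since legacy codes are often
    inconsistently cased."""
    return LOCAL_TO_READ.get((sample_type.upper(), local_code.upper()))

print(map_code("serum", "na"))   # "44I5."
print(map_code("serum", "xyz"))  # None -> route to manual review
```

Normalising the incoherent sample types before lookup is the key practical step; in a real pipeline the unmapped pairs, not the mapped ones, drive the curation effort.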
The Effects of Stellar Dynamics on the Evolution of Young, Dense Stellar Systems
NASA Astrophysics Data System (ADS)
Belkus, H.; van Bever, J.; Vanbeveren, D.
In this paper, we report on first results of a project in Brussels in which we study the effects of stellar dynamics on the evolution of young dense stellar systems using 3 decades of expertise in massive-star evolution and our population (number and spectral) synthesis code. We highlight an unconventionally formed object scenario (UFO-scenario) for Wolf Rayet binaries and study the effects of a luminous blue variable-type instability wind mass-loss formalism on the formation of intermediate-mass black holes.
Visualizing expanding warm dense matter heated by laser-generated ion beams
Bang, Woosuk
2015-08-24
This PowerPoint presentation concluded with the following. We calculated the expected heating per atom and temperatures of various target materials using a Monte Carlo simulation code and SESAME EOS tables. We used aluminum ion beams to heat gold and diamond uniformly and isochorically. A streak camera imaged the expansion of warm dense gold (5.5 eV) and diamond (1.7 eV). GXI-X recorded all 16 x-ray images of the unheated gold bar targets proving that it could image the motion of the gold/diamond interface of the proposed target.
Stark broadening of isolated lines from high-Z emitters in dense plasmas
Weisheit, J.C.; Pollock, E.L.
1980-09-01
The joint distribution of the electric microfield and its longitudinal derivative is required for the calculation of line profiles for the He-like ions in very dense plasmas. We used a molecular dynamics code to compute exact distributions in single- and multi-component plasmas, and then we investigated various analytical approximations to these results. We found that a simplified, two-nearest-neighbor scheme leads to surprisingly accurate distribution functions. Our results are illustrated by sample profiles for Ne⁸⁺ and Ar¹⁶⁺ resonance lines.
Dense Deposit Disease Mimicking a Renal Small Vessel Vasculitis.
Singh, Lavleen; Singh, Geetika; Bhardwaj, Swati; Sinha, Aditi; Bagga, Arvind; Dinda, Amit
2016-01-01
Dense deposit disease is caused by fluid-phase dysregulation of the alternative complement pathway and frequently deviates from the classic membranoproliferative pattern of injury on light microscopy. Other patterns of injury described for dense deposit disease include mesangioproliferative, acute proliferative/exudative, and crescentic GN. Regardless of the histologic pattern, C3 glomerulopathy, which includes dense deposit disease and C3 GN, is defined by immunofluorescence intensity of C3c two or more orders of magnitude greater than any other immune reactant (on a 0-3 scale). Ultrastructural appearances distinguish dense deposit disease and C3 GN. Focal and segmental necrotizing glomerular lesions with crescents, mimicking a small vessel vasculitis such as ANCA-associated GN, are a very rare manifestation of dense deposit disease. We describe our experience with this unusual histologic presentation and distinct clinical course of dense deposit disease, discuss the pitfalls in diagnosis, examine differential diagnoses, and review the relevant literature.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
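Mechanical checking of code against a coding standard, as described above, can be illustrated at toy scale. The sketch below is a hypothetical, minimal line-based checker (not JPL's actual tooling or any real static analyzer) that flags C-like source lines using constructs a safety-critical coding standard might ban; the rule names and patterns are invented for illustration.

```python
import re

# Hypothetical rules a flight-software coding standard might impose.
BANNED_PATTERNS = {
    "goto statement": re.compile(r"\bgoto\b"),
    "dynamic allocation": re.compile(r"\bmalloc\s*\("),
}

def check_source(source: str) -> list:
    """Return (line_number, rule) pairs for every violation found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in BANNED_PATTERNS.items():
            if pattern.search(line):
                violations.append((lineno, rule))
    return violations

sample = """int f(int n) {
    char *buf = malloc(n);
    goto done;
done:
    return 0;
}"""
print(check_source(sample))  # flags the malloc on line 2 and the goto on line 3
```

Real state-of-the-art analyzers work on a parsed syntax tree rather than raw lines, but the certification idea is the same: every rule is checked mechanically, not by convention.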
Villalobos, Luis Francisco; Karunakaran, Madhavan; Peinemann, Klaus-Viktor
2015-05-13
We present the development of a facile phase-inversion method for forming asymmetric membranes with a precisely controlled, high metal ion loading in only the dense layer. The approach combines the use of macromolecule-metal intermolecular complexes to form the dense layer of asymmetric membranes with nonsolvent-induced phase separation to form the porous support. This allows the independent optimization of both the dense layer and the porous support while maintaining the simplicity of a phase-inversion process. Moreover, it facilitates control over (i) the thickness of the dense layer over several orders of magnitude, from less than 15 nm to more than 6 μm, (ii) the type and amount of metal ions loaded in the dense layer, (iii) the morphology of the membrane surface, and (iv) the porosity and structure of the support. This simple and scalable process provides a new platform for building multifunctional membranes with a high loading of well-dispersed metal ions in the dense layer.
Quantum error-correcting codes over mixed alphabets
NASA Astrophysics Data System (ADS)
Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C. H.
2013-08-01
We study the quantum error-correcting codes over mixed alphabets to deal with a more complicated and practical situation in which the physical systems for encoding may have different numbers of energy levels. In particular we investigate their constructions and propose the theory of quantum Singleton bound. Two kinds of code constructions are presented: a projection-based construction for general case and a graphical construction based on a graph-theoretical object composite coding clique dealing with the case of reducible alphabets. We find out some optimal one-error correcting or detecting codes over two alphabets. Our method of composite coding clique also sheds light on constructing standard quantum error-correcting codes, and other families of optimal codes are found.
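For reference, the single-alphabet bound that the paper's mixed-alphabet theory generalizes (the generalized form itself is not reproduced here) is the standard quantum Singleton bound for an [[n,k,d]] code:

```latex
% Quantum Singleton bound for an [[n,k,d]] code over one alphabet:
k \le n - 2(d-1)
% Codes saturating this bound are called quantum MDS codes.
```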
Rank minimization code aperture design for spectrally selective compressive imaging.
Arguello, Henry; Arce, Gonzalo R
2013-03-01
A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.
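The rank-minimization idea can be illustrated with its standard convex surrogate, nuclear-norm shrinkage. The sketch below is illustrative only (it is not the authors' CASSI optimization): it applies singular value thresholding to a stand-in matrix of code-aperture patterns, which can only lower the matrix rank and always lowers the nuclear norm.

```python
import numpy as np

def singular_value_threshold(A, tau):
    """Shrink singular values by tau: the proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# Stand-in for a stack of binary code-aperture patterns (one row per shot).
A = rng.integers(0, 2, size=(8, 32)).astype(float)
A_low = singular_value_threshold(A, tau=2.0)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A_low))
```

In a real design loop this step would alternate with a projection back onto the set of physically realizable (binary) apertures.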
NASA Astrophysics Data System (ADS)
Piron, R.; Blenski, T.
2011-12-01
The Variational Average-Atom in Quantum Plasmas (VAAQP) code is based on a fully variational theory of dense plasmas in equilibrium, in which the neutrality of the Wigner-Seitz ion sphere is not required, contrary to the INFERNO model. We report on some recent progress in the VAAQP model and numerical code. Three important points of the virial theorem derivation are emphasized and explained. The virial theorem is also used as an important tool for checking the formulas and numerical methods used in the code. Applications of the VAAQP code are shown, using the equation of state of beryllium in the warm dense matter regime as an example. Comparisons with the INFERNO model and with available experimental data on the principal Hugoniot are also presented.
Dense-Gas Dispersion in Complex Terrain (PREPRINT)
1993-05-01
Approved for public release; distribution unlimited. A dense-gas version of the ADPIC Lagrangian particle, advection-diffusion model has been developed to ... of momentum principles along with the ideal gas law equation of state for a mixture of gases. ADPIC, which is generally run in conjunction with a ... versatility of coupling the new dense-gas ADPIC with alternative wind flow models. The new dense-gas ADPIC has been used to simulate the atmospheric ...
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
1993-11-01
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena, and their uncertainty, which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Greg Flach, Frank Smith
2014-05-14
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
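The write-inputs / run / read-outputs cycle the DLL performs can be sketched generically. The sketch below is a hypothetical Python analogue of that pattern, not GoldSim's actual DLL calling convention: it serializes inputs to a file, invokes an "external code" (here a stand-in function that doubles its inputs), and reads the results back for the caller.

```python
import json
import tempfile
from pathlib import Path

def fake_external_code(input_path: Path, output_path: Path) -> None:
    """Stand-in for the external application: doubles every input value."""
    values = json.loads(input_path.read_text())
    output_path.write_text(json.dumps([2.0 * v for v in values]))

def run_external(inputs):
    """Write an input file, run the external code, and read back the outputs."""
    workdir = Path(tempfile.mkdtemp())
    input_path, output_path = workdir / "in.json", workdir / "out.json"
    input_path.write_text(json.dumps(inputs))    # 1. create the input file
    fake_external_code(input_path, output_path)  # 2. run the external code
    return json.loads(output_path.read_text())   # 3. return outputs to the caller

print(run_external([1.0, 2.5]))
```

In the real interface the three steps are driven by the instructions file; the stand-in function here would be replaced by a subprocess call to the external application.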
Defeating the coding monsters.
Colt, Ross
2007-02-01
Accuracy in coding is rapidly becoming a required skill for military health care providers. Clinic staffing, equipment purchase decisions, and even reimbursement will soon be based on the coding data that we provide. Learning the complicated myriad of rules to code accurately can seem overwhelming. However, the majority of clinic visits in a typical outpatient clinic generally fall into two major evaluation and management codes, 99213 and 99214. If health care providers can learn the rules required to code a 99214 visit, then this will provide a 90% solution that can enable them to accurately code the majority of their clinic visits. This article demonstrates a step-by-step method to code a 99214 visit, by viewing each of the three requirements as a monster to be defeated.
Optimization of the lead probe neutron detector.
Ziegler, Lee; Ruiz, Carlos L.; Franklin, James Kenneth; Cooper, Gary Wayne; Nelson, Alan J.
2004-03-01
The lead probe neutron detector was originally designed by Spencer and Jacobs in 1965. The detector is based on lead activation due to the following neutron scattering reactions: {sup 207}Pb(n, n'){sup 207m}Pb and {sup 208}Pb(n, 2n){sup 207m}Pb. Delayed gammas from the metastable state of {sup 207m}Pb are counted using a plastic scintillator. The half-life of {sup 207m}Pb is 0.8 seconds. In the work reported here, MCNP was used to optimize the efficiency of the lead probe by suitably modifying the original geometry. A prototype detector was then built and tested. A 'layer cake' design was investigated in which thin (< 5 mm) layers of lead were sandwiched between thicker ({approx} 1 - 2 cm) layers of scintillator. An optimized 'layer cake' design had Figures of Merit (derived from the code) which were a factor of 3 greater than the original lead probe for DD neutrons, and a factor of 4 greater for DT neutrons, while containing 30% less lead. A smaller scale, 'proof of principle' prototype was built by Bechtel/Nevada to verify the code results. Its response to DD neutrons was measured using the DD dense plasma focus at Texas A&M and it conformed to the predicted performance. A voltage and discriminator sweep was performed to determine optimum sensitivity settings. It was determined that a calibration operating point could be obtained using a {sup 133}Ba 'bolt' as is the case with the original lead probe.
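The 0.8-second half-life quoted above sets the useful counting window for the delayed gammas. As a worked example using only that number (the detector efficiencies themselves are not modeled here), the fraction of activated 207mPb nuclei surviving a delay t follows the usual exponential decay law.

```python
import math

HALF_LIFE_S = 0.8  # half-life of the 207mPb metastable state, from the text

def surviving_fraction(t_seconds: float) -> float:
    """Fraction of 207mPb nuclei not yet decayed after t seconds."""
    return math.exp(-math.log(2.0) * t_seconds / HALF_LIFE_S)

def counted_fraction(t_seconds: float) -> float:
    """Fraction of all delayed gammas emitted between t=0 and t seconds."""
    return 1.0 - surviving_fraction(t_seconds)

print(surviving_fraction(0.8))  # one half-life: half the nuclei remain
print(counted_fraction(2.4))    # three half-lives: 7/8 of decays captured
```

This is why counting must begin promptly after the neutron burst: waiting a few seconds forfeits most of the signal.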
Edge compression techniques for visualization of dense directed graphs.
Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher
2013-12-01
We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules'-or groups of nodes-such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition which permits internal structure in modules and allows them to be nested; and Power Graph Analysis which further allows edges to cross module boundaries. These techniques all have the same goal--to compress the set of edges that need to be rendered to fully convey connectivity--but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by--and discuss in particular--the application to software dependency analysis.
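The simplest of the three techniques, grouping nodes with identical neighbor sets, is easy to sketch. The toy below is an illustration of the idea rather than the authors' implementation: directed-graph nodes whose in- and out-neighborhoods coincide collapse into one module, and a single edge to the module then implies all the original edges.

```python
from collections import defaultdict

def group_identical_neighbors(edges):
    """Group nodes of a directed graph whose in- and out-neighbor sets coincide."""
    nodes = {u for e in edges for u in e}
    outs = {v: frozenset(t for s, t in edges if s == v) for v in nodes}
    ins = {v: frozenset(s for s, t in edges if t == v) for v in nodes}
    groups = defaultdict(list)
    for v in sorted(nodes):
        groups[(outs[v], ins[v])].append(v)
    return [tuple(g) for g in groups.values()]

# Toy graph: a and b have identical neighborhoods, as do x and y,
# so six edges compress to the module-level edges {a,b}->{x,y} and z->{a,b}.
edges = [("a", "x"), ("b", "x"), ("a", "y"), ("b", "y"), ("z", "a"), ("z", "b")]
print(group_identical_neighbors(edges))
```

Modular Decomposition and Power Graph Analysis relax this exact-match requirement, which is precisely why they compress more edges at a higher interpretive cost.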
Ultrafast visualization of the structural evolution of dense hydrogen towards warm dense matter
NASA Astrophysics Data System (ADS)
Fletcher, Luke
2016-10-01
Hot dense hydrogen far from equilibrium is ubiquitous in nature, occurring during some of the most violent and least understood events in our universe, such as star formation, supernova explosions, and the creation of cosmic rays. It is also a state of matter important for applications in inertial confinement fusion research and in laser particle acceleration. Rapid progress has occurred in recent years in characterizing the high-pressure structural properties of dense hydrogen under static or dynamic compression. Here, we show that spectrally and angularly resolved x-ray scattering measures the thermodynamic properties of dense hydrogen and resolves the ultrafast evolution and relaxation towards thermodynamic equilibrium. These studies apply ultra-bright x-ray pulses from the Linac Coherent Light Source (LCLS). The interaction of rapidly heated cryogenic hydrogen with a high-peak-power optical laser is visualized with intense LCLS x-ray pulses in a high-repetition-rate pump-probe setting. We demonstrate that electron-ion coupling is affected by the small number of particles in the Debye screening cloud, resulting in much slower ion temperature equilibration than predicted by standard theory. This work was supported by the DOE Office of Science, Fusion Energy Science under FWP 100182.
Codeword stabilized quantum codes: Algorithm and structure
NASA Astrophysics Data System (ADS)
Chuang, Isaac; Cross, Andrew; Smith, Graeme; Smolin, John; Zeng, Bei
2009-04-01
The codeword stabilized (CWS) quantum code formalism presents a unifying approach to both additive and nonadditive quantum error-correcting codes [IEEE Trans. Inf. Theory 55, 433 (2009)]. This formalism reduces the problem of constructing such quantum codes to finding a binary classical code correcting an error pattern induced by a graph state. Finding such a classical code can be very difficult. Here, we consider an algorithm which maps the search for CWS codes to a problem of identifying maximum cliques in a graph. While solving this problem is in general very hard, we provide three structure theorems which reduce the search space, specifying certain admissible and optimal ((n,K,d)) additive codes. In particular, we find that there does not exist any ((7,3,3)) CWS code, though the linear programming bound does not rule it out. The complexity of the CWS-search algorithm is compared with that of the contrasting method introduced by Aggarwal and Calderbank [IEEE Trans. Inf. Theory 54, 1700 (2008)].
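The clique mapping can be illustrated at toy scale. The sketch below is not the authors' algorithm: it brute-forces the maximum clique of a tiny compatibility graph, where vertices stand in for candidate codewords and an edge marks a pair that can coexist in one code, so a clique of size K corresponds to a K-codeword code.

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique; feasible only for very small graphs."""
    adj = set(map(frozenset, edges))
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if all(frozenset(p) in adj for p in combinations(cand, 2)):
                return cand
    return ()

# Toy compatibility graph: the largest mutually compatible set is {0, 1, 2}.
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(max_clique(V, E))
```

Real CWS searches need dedicated clique solvers plus the paper's structure theorems to prune the search space, since brute force scales exponentially.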
GPU-enabled particle-particle particle-tree scheme for simulating dense stellar cluster system
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Portegies Zwart, Simon; Makino, Junichiro
2015-07-01
We describe the implementation and performance of the P3T (Particle-Particle Particle-Tree) scheme for simulating dense stellar systems. In P3T, the force experienced by a particle is split into short-range and long-range contributions. Short-range forces are evaluated by direct summation and integrated with the fourth-order Hermite predictor-corrector method with block timesteps. For long-range forces, we use a combination of the Barnes-Hut tree code and the leapfrog integrator. The tree part of our simulation environment is accelerated using graphics processing units (GPUs), whereas the direct summation is carried out on the host CPU. Our code gives excellent performance and accuracy for star cluster simulations with a large number of particles, even when the core size of the star cluster is small.
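The short-range/long-range split is the heart of the scheme. A minimal sketch follows; the changeover polynomial here is an illustrative smoothstep-like choice, not the specific cutoff function used in the actual code. The key invariant is that the two parts always sum to the full pairwise force, so the split changes only which integrator handles each piece.

```python
def weight(r, r_cut):
    """Smooth changeover: 1 at r = 0, 0 beyond r_cut (illustrative choice)."""
    if r >= r_cut:
        return 0.0
    x = r / r_cut
    return (1.0 - x) ** 2 * (1.0 + 2.0 * x)

def split_force(r, r_cut, gm=1.0):
    """Split the pairwise 1/r^2 force magnitude into short- and long-range parts."""
    total = gm / r**2
    w = weight(r, r_cut)
    return w * total, (1.0 - w) * total  # (direct-summation part, tree part)

short, long_range = split_force(0.5, r_cut=1.0)
print(short, long_range)  # the two parts sum to the full force, 1/0.5**2 = 4
```

Beyond r_cut the short-range part vanishes, so the expensive Hermite direct summation only ever touches close neighbors, while the tree plus leapfrog handles everything else.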
Order in dense hydrogen at low temperatures
Edwards, B.; Ashcroft, N. W.
2004-01-01
By increase in density, impelled by pressure, the electronic energy bands in dense hydrogen attain significant widths. Nevertheless, arguments can be advanced suggesting that a physically consistent description of the general consequences of this electronic structure can still be constructed from interacting but state-dependent multipoles. These reflect, in fact self-consistently, a disorder-induced localization of electron states partially manifesting the effects of proton dynamics; they retain very considerable spatial inhomogeneity (as they certainly do in the molecular limit). This description, which is valid provided that an overall energy gap has not closed, leads at a mean-field level to the expected quadrupolar coupling, but also for certain structures to the eventual emergence of dipolar terms and their coupling when a state of broken charge symmetry is developed. A simple Hamiltonian incorporating these basic features then leads to a high-density, low-temperature phase diagram that appears to be in substantial agreement with experiment. In particular, it accounts for the fact that whereas the I–II phase boundary has a significant isotope dependence, the II–III phase boundary has very little. PMID:15028839
Superconductivity in dense carbon-based materials
NASA Astrophysics Data System (ADS)
Lu, Siyu; Liu, Hanyu; Naumov, Ivan I.; Meng, Sheng; Li, Yinwei; Tse, John S.; Yang, Bai; Hemley, Russell J.
2016-03-01
Guided by a simple strategy in search of new superconducting materials, we predict that high-temperature superconductivity can be realized in classes of high-density materials having strong sp3 chemical bonding and high lattice symmetry. We examine in detail sodalite carbon frameworks doped with simple metals such as Li, Na, and Al. Though such materials share some common features with doped diamond, their doping level is not limited, and the density of states at the Fermi level in them can be as high as that in the renowned MgB2. Together with other factors, this boosts the superconducting temperature (Tc) in the materials investigated to higher levels compared to doped diamond. For example, the Tc of sodalite-like NaC6 is predicted to be above 100 K. This phase and a series of other sodalite-based superconductors are predicted to be metastable phases but are dynamically stable. Owing to the rigid carbon framework of these and related dense carbon materials, these doped sodalite-based structures could be recoverable as potentially useful superconductors.
Transcriptional proofreading in dense RNA polymerase traffic
NASA Astrophysics Data System (ADS)
Sahoo, Mamata; Klumpp, Stefan
2011-12-01
The correction of errors during transcription involves the diffusive backward translocation (backtracking) of RNA polymerases (RNAPs) on the DNA. A trailing RNAP on the same template can interfere with backtracking as it progressively restricts the space that is available for backward translocation and thereby ratchets the backtracked RNAP forward. We analyze the resulting negative impact on proofreading theoretically using a driven lattice gas model of transcription under conditions of dense RNAP traffic. The fraction of errors that are corrected is calculated exactly for the case of a single RNAP; for multi-RNAP transcription, we use simulations and an analytical approximation and find a decrease with increasing traffic density. Moreover, we ask how the parameters of the system have to be set to keep down the impact of the interference of a trailing RNAP. Our analysis uncovers a surprisingly simple picture of the design of the error correction system: its efficiency is essentially determined by the rate for the initial backtracking step, while the value of the cleavage rate ensures that the correction mechanism remains efficient at high transcription rates. Finally, we argue that our analysis can also be applied to cases with transcription-translation coupling where the leading ribosome on the transcript assumes the role of the trailing RNAP.
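The driven-lattice-gas picture of dense RNAP traffic can be made concrete with the textbook totally asymmetric simple exclusion process (TASEP). The minimal ring simulation below is purely illustrative (it has no backtracking or proofreading, unlike the paper's model); it just shows the crowding effect underlying the analysis: hard-core exclusion makes the particle current fall off once the lattice becomes dense.

```python
import random

def tasep_current(density, sites=200, sweeps=2000, seed=1):
    """Measure the steady-state particle current of a TASEP on a ring."""
    rng = random.Random(seed)
    n = int(density * sites)
    occ = [True] * n + [False] * (sites - n)
    rng.shuffle(occ)  # a Bernoulli configuration is stationary on the ring
    hops = 0
    for _ in range(sweeps):
        for _ in range(sites):
            i = rng.randrange(sites)
            j = (i + 1) % sites
            if occ[i] and not occ[j]:  # hop right only into an empty site
                occ[i], occ[j] = False, True
                hops += 1
    return hops / (sweeps * sites)

# Mean-field result for the ring: J = rho * (1 - rho), maximal at rho = 0.5.
print(tasep_current(0.2), tasep_current(0.5), tasep_current(0.8))
```

In the transcription setting each particle is an RNAP; the same exclusion that limits the current is what ratchets a backtracked polymerase forward and degrades proofreading.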
Elemental nitrogen partitioning in dense interstellar clouds
Daranlot, Julien; Hincelin, Ugo; Bergeat, Astrid; Costes, Michel; Loison, Jean-Christophe; Wakelam, Valentine; Hickson, Kevin M.
2012-01-01
Many chemical models of dense interstellar clouds predict that the majority of gas-phase elemental nitrogen should be present as N2, with an abundance approximately five orders of magnitude less than that of hydrogen. As a homonuclear diatomic molecule, N2 is difficult to detect spectroscopically through infrared or millimeter-wavelength transitions. Therefore, its abundance is often inferred indirectly through its reaction product N2H+. Two main formation mechanisms, each involving two radical-radical reactions, are the source of N2 in such environments. Here we report measurements of the low temperature rate constants for one of these processes, the N + CN reaction, down to 56 K. The measured rate constants for this reaction, and those recently determined for two other reactions implicated in N2 formation, are tested using a gas-grain model employing a critically evaluated chemical network. We show that the amount of interstellar nitrogen present as N2 depends on the competition between its gas-phase formation and the depletion of atomic nitrogen onto grains. As the reactions controlling N2 formation are inefficient, we argue that N2 does not represent the main reservoir species for interstellar nitrogen. Instead, elevated abundances of more labile forms of nitrogen such as NH3 should be present on interstellar ices, promoting the eventual formation of nitrogen-bearing organic molecules. PMID:22689957
Order and instabilities in dense bacterial colonies
NASA Astrophysics Data System (ADS)
Tsimring, Lev
2012-02-01
The structure of cell colonies is governed by the interplay of many physical and biological factors, ranging from properties of the surrounding media to cell-cell communication and gene expression in individual cells. The biomechanical interactions arising from the growth and division of individual cells in confined environments are ubiquitous, yet little work has focused on this fundamental aspect of colony formation. By combining experimental observations of growing monolayers of a non-motile strain of the bacterium Escherichia coli in a shallow microfluidic chemostat with discrete-element simulations and continuous theory, we demonstrate that expansion of a dense colony leads to rapid orientational alignment of rod-like cells. However, in larger colonies, anisotropic compression may lead to a buckling instability which breaks perfect nematic order. Furthermore, we found that in shallow cavities feedback between cell growth and mobility in a confined environment leads to a novel cell streaming instability. Joint work with W. Mather, D. Volfson, O. Mondragón-Palomino, T. Danino, S. Cookson, and J. Hasty (UCSD) and D. Boyer, S. Orozco-Fuentes (UNAM, Mexico).
Dense fluids—New aspects and results
NASA Astrophysics Data System (ADS)
Franck, E. U.
1986-05-01
Dense fluids at elevated and supercritical temperatures find increased interest in science and technology. In this presentation special attention is given to binary mixtures with polar components. Methods and results of experiments with such high pressure-high temperature fluids are described. Far infrared spectra of CHClF2 and CHF3 give indications of the types of molecular motion in the supercritical phases. “Enhancement factors” for the solubility of a solid solute like caffeine in high pressure CO2 have been determined spectroscopically. The phase diagrams in the pressure-temperature-composition space and critical curves for water combined with nitrogen, oxygen, methane and helium have been measured recently to 2500 bar and 450°C. A “rational” equation of state permits calculation of critical curves and binodal surfaces for such systems. An extended investigation was made of the ternary system water-methane-sodium chloride. Small additions of salt shift critical curves by 100°C and more to higher temperatures. In water-methane mixtures between 400 and 500°C and at 1000 bar “supercritical flames” and “hydrothermal combustion” could be produced with injected oxygen. Binary liquid mixtures of cesium and cesium hydride at elevated hydrogen pressure and up to 800°C show the phenomenon of a continuous transition from metallic to ionic fluids. Electric conductance measurements over the whole range of concentrations are presented and discussed.
Thermochemistry of dense hydrous magnesium silicates
NASA Technical Reports Server (NTRS)
Bose, Kunal; Burnley, Pamela; Navrotsky, Alexandra
1994-01-01
Recent experimental investigations under mantle conditions have identified a suite of dense hydrous magnesium silicate (DHMS) phases that could be conduits to transport water to at least the 660 km discontinuity via mature, relatively cold, subducting slabs. Water released from successive dehydration of these phases during subduction could be responsible for deep focus earthquakes, mantle metasomatism and a host of other physico-chemical processes central to our understanding of the earth's deep interior. In order to construct a thermodynamic data base that can delineate and predict the stability ranges for DHMS phases, reliable thermochemical and thermophysical data are required. One of the major obstacles in calorimetric studies of phases synthesized under high pressure conditions has been the limitation imposed by the small (less than 5 mg) sample mass. Our refinement of calorimetric techniques now allows precise determination of enthalpies of solution of less than 5 mg samples of hydrous magnesium silicates. For example, high temperature solution calorimetry of natural talc (Mg0.99Fe0.01)Si4O10(OH)2, periclase (MgO) and quartz (SiO2) yields enthalpies of drop solution at 1044 K of 592.2 (2.2), 52.01 (0.12) and 45.76 (0.4) kJ/mol, respectively. The corresponding enthalpy of formation from the oxides at 298 K for talc is -5908.2 kJ/mol, agreeing within 0.1 percent with literature values.
Dynamic shear jamming in dense suspensions
NASA Astrophysics Data System (ADS)
Peters, Ivo; Majumdar, Sayantan; Jaeger, Heinrich
Shear a dense suspension of cornstarch and water hard enough, and the system seems to solidify as a result. Indeed, previous studies have shown that a jamming front propagates through these systems until, after interaction with boundaries, a jammed solid spans across the system. Because these fully jammed states are only observed if the deformation is fast enough, a natural question to ask is how this phenomenon is related to the discontinuous shear thickening (DST) behavior of these suspensions. We present a single experimental setup in which we can, on the one hand, measure rheological flow curves and, on the other, determine whether the suspension is in a jammed state. We do this by using a large-gap cylindrical Couette cell, where we control the applied shear stress using a rheometer. Because our setup only applies shear, the jammed states we observe are shear-jammed and cannot be a result of an overall increase in packing fraction. We probe for jammed states by dropping small steel spheres on the surface of the suspension and identifying elastic responses. Our experiments reveal a clear distinction between the onset of DST and shear-jammed states, which have qualitatively different trends with packing fraction close to the isotropic jamming point.
Dense colloidal fluids form denser amorphous sediments
Liber, Shir R.; Borohovich, Shai; Butenko, Alexander V.; Schofield, Andrew B.; Sloutskin, Eli
2013-01-01
We relate, by simple analytical centrifugation experiments, the density of colloidal fluids with the nature of their randomly packed solid sediments. We demonstrate that the most dilute fluids of colloidal hard spheres form loosely packed sediments, where the volume fraction of the particles approaches in frictional systems the random loose packing limit, φRLP = 0.55. The dense fluids of the same spheres form denser sediments, approaching the so-called random close packing limit, φRCP = 0.64. Our experiments, where particle sedimentation in a centrifuge is sufficiently rapid to avoid crystallization, demonstrate that the density of the sediments varies monotonically with the volume fraction of the initial suspension. We reproduce our experimental data by simple computer simulations, where structural reorganizations are prohibited, such that the rate of sedimentation is irrelevant. This suggests that in colloidal systems, where viscous forces dominate, the structure of randomly close-packed and randomly loose-packed sediments is determined by the well-known structure of the initial fluids of simple hard spheres, provided that the crystallization is fully suppressed. PMID:23530198
Synthesis of dense energetic materials. Annual report
Coon, C.
1982-07-01
The objective of the research described in this report is to synthesize new, dense, stable, highly energetic materials which will ultimately be candidates for improved explosive and propellant formulations. Following strict guidelines pertaining to energy, density, stability, etc., specific target molecules were chosen that appear to possess the improved properties desired for new energetic materials. This report summarizes research on the synthesis of these target materials from February 1981 to January 1982. The following compounds were synthesized: 5,5'-diamino-3,3'-bioxadiazole(1,2,4); 5,5'-bis(trichloromethyl)-3,3'-di(1,2,4-oxadiazole); 3,3'-bi(1,2,4-oxadiazole); ethylene tetranitramine (ETNA); N,N-bis(methoxymethyl)acetamide; N,N-bis(chloromethyl)acetamide; 7,8-dimethylglycoluril; and 3,9-di(t-butyl)-13,14-dimethyl-tetracyclo-(5,5,2,0^(5,13),0^(11,14))-1,3,5,7,9,11-hexaaza-6,12-dioxotetradecane.
Droplet formation and scaling in dense suspensions
Miskin, Marc Z.; Jaeger, Heinrich M.
2012-01-01
When a dense suspension is squeezed from a nozzle, droplet detachment can occur similar to that of pure liquids. While in pure liquids the process of droplet detachment is well characterized through self-similar profiles and known scaling laws, we show here that the simple presence of particles causes suspensions to break up in a new fashion. Using high-speed imaging, we find that detachment of a suspension drop is described by a power law; specifically, we find that the minimum neck radius, rm, follows a power law in the time τ remaining until breakup at τ = 0. We demonstrate data collapse in a variety of particle/liquid combinations, packing fractions, solvent viscosities, and initial conditions. We argue that this scaling is a consequence of particles deforming the neck surface, thereby creating a pressure that is balanced by inertia, and show how it emerges from topological constraints that relate particle configurations with macroscopic Gaussian curvature. This new type of scaling, uniquely enforced by geometry and regulated by the particles, displays memory of its initial conditions, fails to be self-similar, and has implications for the pressure given at generic suspension interfaces. PMID:22392979
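Power-law scaling of the kind reported is typically extracted from neck-radius data by a log-log fit. The sketch below uses synthetic, noiseless data with an illustrative exponent (not the paper's measured value) and recovers it with an ordinary least-squares slope in log-log coordinates.

```python
import math

# Synthetic neck-radius data r_m = A * tau^p with an illustrative exponent p.
A, p = 3.0, 2.0 / 3.0
taus = [10 ** (-k / 4.0) for k in range(1, 13)]
radii = [A * t ** p for t in taus]

# The least-squares slope of log(r_m) versus log(tau) estimates the exponent.
xs = [math.log(t) for t in taus]
ys = [math.log(r) for r in radii]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 6))
```

With real high-speed-imaging data the same fit would be restricted to the decades of τ where the power law actually holds, since memory of initial conditions breaks the scaling at early times.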
Activated Dynamics in Dense Model Nanocomposites
NASA Astrophysics Data System (ADS)
Xie, Shijie; Schweizer, Kenneth
The nonlinear Langevin equation approach is applied to investigate the ensemble-averaged activated dynamics of small molecule liquids (or disconnected segments in a polymer melt) in dense nanocomposites under model isobaric conditions where the spherical nanoparticles are dynamically fixed. Fully thermalized and quenched-replica integral equation theory methods are employed to investigate the influence on matrix dynamics of the equilibrium and nonequilibrium nanocomposite structure, respectively. In equilibrium, the miscibility window can be narrow due to depletion and bridging attraction induced phase separation which limits the study of activated dynamics to regimes where the barriers are relatively low. In contrast, by using replica integral equation theory, macroscopic demixing is suppressed, and the addition of nanoparticles can induce much slower activated matrix dynamics which can be studied over a wide range of pure liquid alpha relaxation times, interfacial attraction strengths and ranges, particle sizes and loadings, and mixture microstructures. Numerical results for the mean activated relaxation time, transient localization length, matrix elasticity and kinetic vitrification in the nanocomposite will be presented.
Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter
NASA Astrophysics Data System (ADS)
Bang, W.; Albright, B. J.; Bradley, P. A.; Vold, E. L.; Boettger, J. C.; Fernández, J. C.
2016-07-01
Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. These simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement.
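The square-root temperature dependence cited for an ideal plasma comes from the ion-acoustic sound speed that sets the rarefaction (expansion) velocity; in a common simplified form (standard symbols, not taken from the paper):

```latex
% Ion-acoustic speed of an ideal plasma: square-root in electron temperature,
% in contrast with the linear dependence found in the warm-dense-matter
% simulations, where the SESAME equation of state replaces the ideal gas law.
c_s = \sqrt{\frac{Z k_B T_e}{m_i}}
```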
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
A new investigation shows that, starting from the BCH (21,15;3) code represented as a 7 x 3 matrix and adding a row and column to provide even parity, one obtains an 8 x 4 matrix (32,15;8) code. An additional dimension is obtained by specifying odd parity on the rows and even parity on the columns, i.e., by adjoining to the 8 x 4 matrix a matrix that is zero except for the fourth column (of all ones). Furthermore, any seven rows and three columns form the BCH (21,15;3) code. This box code has the same weight structure as the quadratic residue and BCH codes of the same dimensions. Whether there exists an algebraic isomorphism to either code is as yet unknown.
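The parity-extension step described above can be sketched directly. The starting matrix here is an arbitrary 21-bit array, not an actual BCH(21,15;3) codeword:

```python
# Extend a 7x3 binary matrix (21 bits) to an 8x4 matrix (32 bits) by
# appending a parity column and then a parity row, so every row and
# column of the result has even parity.
import numpy as np

def extend_even_parity(m):
    """Append a parity column, then a parity row, giving even parity overall."""
    m = np.asarray(m) % 2
    m = np.column_stack([m, m.sum(axis=1) % 2])   # 7x3 -> 7x4 (row parity bits)
    return np.vstack([m, m.sum(axis=0) % 2])      # 7x4 -> 8x4 (column parity bits)

data = np.arange(21).reshape(7, 3) % 2            # arbitrary 21-bit payload
box = extend_even_parity(data)
print(box.shape)                                  # -> (8, 4)
# All row and column parities are even:
print(int((box.sum(axis=0) % 2).sum()), int((box.sum(axis=1) % 2).sum()))  # -> 0 0
```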
A Construction of Lossy Source Code Using LDPC Matrices
NASA Astrophysics Data System (ADS)
Miyake, Shigeki; Muramatsu, Jun
Research into applying LDPC code theory, which is used for channel coding, to source coding has received much attention in several research fields such as distributed source coding. In this paper, a source coding problem with a fidelity criterion is considered. Matsunaga et al. and Martinian et al. constructed a lossy code under the conditions of a binary alphabet, a uniform distribution, and a Hamming measure as the fidelity criterion. We extend their results and construct a lossy code under the relaxed conditions of a binary alphabet, a distribution that is not necessarily uniform, and a fidelity measure that is bounded and additive, and we show that the code can achieve the optimal rate, i.e., the rate-distortion function. By applying a formula for the random walk on a lattice to the analysis of LDPC matrices on Zq, where q is a prime number, we show that results similar to those for the binary alphabet also hold for Zq, i.e., for a multiple-symbol alphabet.
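For reference, the rate-distortion function such a lossy code aims to achieve has a closed form in the binary/Hamming setting the cited constructions address: R(D) = h(p) − h(D) for 0 ≤ D < min(p, 1−p), with h the binary entropy. This is a textbook formula, not code from the paper:

```python
# Rate-distortion function for a Bernoulli(p) source under Hamming distortion.
import math

def h(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion_binary(p, D):
    """R(D) in bits per source symbol; zero once D reaches min(p, 1-p)."""
    if D >= min(p, 1 - p):
        return 0.0
    return h(p) - h(D)

# A fair-coin source can be compressed to ~0.5 bit/symbol if we
# tolerate an 11% bit-error distortion.
print(round(rate_distortion_binary(0.5, 0.11), 3))  # -> 0.5
```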
The chemistry of phosphorus in dense interstellar clouds
NASA Technical Reports Server (NTRS)
Thorne, L. R.; Anicich, V. G.; Prasad, S. S.; Huntress, W. T., Jr.
1984-01-01
Laboratory experiments show that the ion-molecule chemistry of phosphorus is significantly different from that of nitrogen in dense interstellar clouds. The PH3 molecule is not readily formed by gas-phase, ion-molecule reactions in these regions. Laboratory results used in a simple kinetic model indicate that the most abundant molecule containing phosphorus in dense clouds is PO.
Mining connected global and local dense subgraphs for bigdata
NASA Astrophysics Data System (ADS)
Wu, Bo; Shen, Haiying
2016-01-01
The problem of discovering connected dense subgraphs of natural graphs is important in data analysis. Discovering dense subgraphs that do not contain denser subgraphs or are not contained in denser subgraphs (called significant dense subgraphs) is also critical for wide-ranging applications. In spite of many works on discovering dense subgraphs, there are no algorithms that can guarantee the connectivity of the returned subgraphs or discover significant dense subgraphs. Hence, in this paper, we define two subgraph discovery problems to discover connected and significant dense subgraphs, propose polynomial-time algorithms and theoretically prove their validity. We also propose an algorithm to further improve the time and space efficiency of our basic algorithm for discovering significant dense subgraphs in big data by taking advantage of the unique features of large natural graphs. In the experiments, we use massive natural graphs to evaluate our algorithms in comparison with previous algorithms. The experimental results show the effectiveness of our algorithms for the two problems and their efficiency. This work is also the first that reveals the physical significance of significant dense subgraphs in natural graphs from different domains.
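As context, a classical greedy baseline for (unconstrained) densest-subgraph discovery is Charikar's peeling algorithm, sketched below. Unlike the paper's algorithms, it guarantees neither connectivity nor significance of the returned subgraph:

```python
# Greedy peeling: repeatedly remove a minimum-degree vertex and keep the
# intermediate vertex set with the best edge density |E|/|V|. This is a
# 2-approximation for the densest subgraph problem.
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Return a vertex set approximately maximizing |E|/|V|."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(adj)
    m = sum(len(s) for s in adj.values()) // 2
    best, best_density = set(alive), m / len(alive)
    while alive:
        u = min(alive, key=lambda x: len(adj[x]))   # peel a min-degree vertex
        m -= len(adj[u])
        for w in adj[u]:
            adj[w].discard(u)
        del adj[u]
        alive.discard(u)
        if alive and m / len(alive) > best_density:
            best, best_density = set(alive), m / len(alive)
    return best

# K4 (vertices 1-4) with a pendant vertex 5: K4 is the densest part.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
print(sorted(densest_subgraph_peel(edges)))  # -> [1, 2, 3, 4]
```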
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1996-01-01
This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals. And DYSEAL provides dynamics for the seal geometry.
Phonological coding during reading
Leinenger, Mallorie
2014-01-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound-based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next, the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679
Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.
1985-03-01
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.
Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specifying the geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
NASA Technical Reports Server (NTRS)
Garabedian, P. R.
1979-01-01
Computer codes for the design and analysis of transonic airfoils are considered. The design code relies on the method of complex characteristics in the hodograph plane to construct shockless airfoils. The analysis code uses artificial viscosity to calculate flows with weak shock waves at off-design conditions. Comparisons with experiments show that an excellent simulation of two dimensional wind tunnel tests is obtained. The codes have been widely adopted by the aircraft industry as a tool for the development of supercritical wing technology.
Sensitivity of coded mask telescopes
Skinner, Gerald K
2008-05-20
Simple formulas are often used to estimate the sensitivity of coded mask x-ray or gamma-ray telescopes, but these are strictly applicable only if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available a procedure is given that allows the calculation of the sensitivity. We consider certain aspects of the optimization of the design of a coded mask telescope and show that when the detector spatial resolution and the mask to detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.
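As background, the coded mask principle the sensitivity formulas describe can be sketched in one dimension (illustrative only, not the paper's formalism): a point source casts a shifted copy of the mask onto the detector, and cross-correlating the detector counts with the mask pattern recovers the source position. The quadratic-residue mask below is a standard choice with flat off-peak autocorrelation:

```python
# 1-D coded mask toy model: shadow = cyclically shifted mask; the source
# offset is recovered as the lag that maximizes the cross-correlation.
import numpy as np

p = 31  # mask length: prime with p % 4 == 3 gives flat off-peak correlation
# Quadratic-residue mask via Euler's criterion: element i is open iff
# i is a nonzero quadratic residue mod p.
mask = np.array([1 if pow(i, (p - 1) // 2, p) == 1 else 0 for i in range(p)])

def detector_image(mask, shift):
    """Shadow of the mask cast by a point source at integer offset `shift`."""
    return np.roll(mask, shift)

def reconstruct(detector, mask):
    """Cross-correlate detector counts with the mask; argmax = source offset."""
    corr = [np.dot(detector, np.roll(mask, s)) for s in range(len(mask))]
    return int(np.argmax(corr))

print(reconstruct(detector_image(mask, 17), mask))  # -> 17
```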
SUPPORTED DENSE CERAMIC MEMBRANES FOR OXYGEN SEPARATION
Timothy L. Ward
2003-03-01
This project addresses the need for reliable fabrication methods of supported thin/thick dense ceramic membranes for oxygen separation. Some ceramic materials that possess mixed conductivity (electronic and ionic) at high temperature have the potential to permeate oxygen with perfect selectivity, making them very attractive for oxygen separation and membrane reactor applications. In order to maximize permeation rates at the lowest possible temperatures, it is desirable to minimize diffusional limitations within the ceramic by reducing the thickness of the ceramic membrane, preferably to thicknesses of 10 µm or thinner. It has proven to be very challenging to reliably fabricate dense, defect-free ceramic membrane layers of such thickness. In this project we are investigating the use of ultrafine SrCo₀.₅FeOₓ (SCFO) powders produced by aerosol pyrolysis to fabricate such supported membranes. SrCo₀.₅FeOₓ is a ceramic composition that has been shown to have desirable oxygen permeability, as well as good chemical stability in the reducing environments that are encountered in some important applications. Our approach is to use a doctor blade procedure to deposit pastes prepared from the aerosol-derived SCFO powders onto porous SCFO supports. We have previously shown that membrane layers deposited from the aerosol powders can be sintered to high density without densification of the underlying support. However, these membrane layers contained large-scale cracks and open areas, making them unacceptable for membrane purposes. In the past year, we have refined the paste formulations based on guidance from the ceramic tape casting literature. We have identified a multicomponent organic formulation utilizing castor oil as dispersant in a solvent of mineral spirits and isopropanol. Other additives were polyvinylbutyral as binder and dibutylphthalate as plasticizer. The nonaqueous formulation has superior wetting properties with the powder, and
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
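The design objective described above can be illustrated with a toy calculation (not the authors' implementation): for an equiprobable BPSK input over an AWGN channel, the mutual information between the transmitted bit and the output of a threshold quantizer has a closed form, and finer quantizers preserve more of it:

```python
# Mutual information I(X;Q) between an equiprobable BPSK bit X (+/-1 over
# AWGN with std sigma) and the output Q of a quantizer given by its
# thresholds. Designing a quantizer = choosing thresholds maximizing this.
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mutual_information(thresholds, sigma):
    """I(X;Q) in bits for a threshold quantizer of the channel output."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    mi = 0.0
    for a, b in zip(edges, edges[1:]):
        p_bin = {x: phi((b - x) / sigma) - phi((a - x) / sigma) for x in (-1, 1)}
        p = 0.5 * (p_bin[-1] + p_bin[1])             # marginal bin probability
        for x in (-1, 1):
            if p_bin[x] > 0:
                mi += 0.5 * p_bin[x] * math.log2(p_bin[x] / p)
    return mi

# A 3-bit (7-threshold) quantizer recovers more information than a hard
# decision (single threshold at 0).
hard = mutual_information([0.0], sigma=1.0)
soft = mutual_information([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5], sigma=1.0)
print(hard < soft)  # -> True
```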
Model For Dense Molecular Cloud Cores
NASA Technical Reports Server (NTRS)
Doty, Steven D.; Neufeld, David A.
1997-01-01
We present a detailed theoretical model for the thermal balance, chemistry, and radiative transfer within quiescent dense molecular cloud cores that contain a central protostar. In the interior of such cores, we expect the dust and gas temperatures to be well coupled, while in the outer regions CO rotational emissions dominate the gas cooling and the predicted gas temperature lies significantly below the dust temperature. Large spatial variations in the gas temperature are expected to affect the gas phase chemistry dramatically; in particular, the predicted water abundance varies by more than a factor of 1000 within cloud cores that contain luminous protostars. Based upon our predictions for the thermal and chemical structure of cloud cores, we have constructed self-consistent radiative transfer models to compute the line strengths and line profiles for transitions of ¹²CO, ¹³CO, C¹⁸O, ortho- and para-H₂¹⁶O, ortho- and para-H₂¹⁸O, and O I. We carried out a general parameter study to determine the dependence of the model predictions upon the parameters assumed for the source. We expect many of the far-infrared and submillimeter rotational transitions of water to be detectable either in emission or absorption with the use of the Infrared Space Observatory (ISO) and the Submillimeter Wave Astronomy Satellite. Quiescent, radiatively heated hot cores are expected to show low-gain maser emission in the 183 GHz 3₁₃-2₂₀ water line, such as has been observed toward several hot core regions using ground-based telescopes. We predict the ³P₁-³P₂ fine-structure transition of atomic oxygen near 63 µm to be in strong absorption against the continuum for many sources. Our model can also account successfully for recent ISO observations of absorption in rovibrational transitions of water toward the source AFGL 2591.
The chemistry of dense interstellar clouds
NASA Technical Reports Server (NTRS)
Irvine, W. M.
1991-01-01
The basic theme of this program is the study of molecular complexity and evolution in interstellar and circumstellar clouds incorporating the biogenic elements. Recent results include the identification of a new astronomical carbon-chain molecule, C4Si. This species was detected in the envelope expelled from the evolved star IRC+10216 in observations at the Nobeyama Radio Observatory in Japan. C4Si is the carrier of six unidentified lines which had previously been observed. This detection reveals the existence of a new series of carbon-chain molecules, CₙSi (n = 1, 2, 4). Such molecules may well be formed from the reaction of Si(+) with acetylene and acetylene derivatives. Other recent research has concentrated on the chemical composition of the cold, dark interstellar clouds, the nearest dense molecular clouds to the solar system. Such regions have very low kinetic temperatures, on the order of 10 K, and are known to be formation sites for solar-type stars. We have recently identified for the first time in such regions the species H2S, NO, and HCOOH (formic acid). The H2S abundance appears to exceed that predicted by gas-phase models of ion-molecule chemistry, perhaps suggesting the importance of synthesis on grain surfaces. Additional observations in dark clouds have studied the ratio of ortho- to para-thioformaldehyde. Since this ratio is expected to be unaffected by both radiative and ordinary collisional processes in the cloud, it may well reflect the formation conditions for this molecule. The ratio is observed to depart from that expected under conditions of chemical equilibrium at formation, perhaps reflecting efficient interchange between cold dust grains and the gas phase.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
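The per-block multiplier mechanism can be sketched as follows (the quantization matrix, multipliers, and test block below are arbitrary, and the perceptual-error model that selects the multipliers is omitted):

```python
# One base quantization matrix per channel; each 8x8 block additionally
# gets a scalar multiplier that scales the matrix before quantizing that
# block's DCT coefficients. Larger multipliers quantize more coarsely.
import numpy as np

def quantize_block(dct_block, q_matrix, multiplier):
    """Quantize one 8x8 DCT block with a scaled quantization matrix."""
    return np.round(dct_block / (q_matrix * multiplier)).astype(int)

def dequantize_block(levels, q_matrix, multiplier):
    """Invert the quantization (up to rounding error)."""
    return levels * q_matrix * multiplier

rng = np.random.default_rng(0)
block = rng.normal(scale=50.0, size=(8, 8))   # stand-in DCT coefficients
q = np.full((8, 8), 16.0)                     # flat base matrix (arbitrary)

fine = quantize_block(block, q, 0.5)
coarse = quantize_block(block, q, 4.0)
# Coarser quantization can only zero out more coefficients:
print(np.count_nonzero(coarse) <= np.count_nonzero(fine))  # -> True
```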
Advanced Imaging Optics Utilizing Wavefront Coding.
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers the potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend a system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined the image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront-coded system.
Energy-Efficient Channel Coding Strategy for Underwater Acoustic Networks.
Barreto, Grasielli; Simão, Daniel H; Pellenz, Marcelo E; Souza, Richard D; Jamhour, Edgard; Penna, Manoel C; Brante, Glauber; Chang, Bruno S
2017-03-31
Underwater acoustic networks (UAN) allow for efficiently exploiting and monitoring the sub-aquatic environment. These networks are characterized by long propagation delays, error-prone channels and half-duplex communication. In this paper, we address the problem of energy-efficient communication through the use of optimized channel coding parameters. We consider a two-layer encoding scheme employing forward error correction (FEC) codes and fountain codes (FC) for UAN scenarios without feedback channels. We model and evaluate the energy consumption of different channel coding schemes for a K-distributed multipath channel. The parameters of the FEC encoding layer are optimized by selecting the optimal error correction capability and the code block size. The results show the best parameter choice as a function of the link distance and received signal-to-noise ratio.
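As a toy illustration of the trade-off being optimized (the channel model, energy numbers, and parity-cost rule below are invented, not the paper's model): stronger FEC lowers the code rate but raises the probability that a block decodes, so the energy per delivered information bit is minimized at an intermediate error-correction capability:

```python
# Energy per delivered information bit for a hypothetical t-error-correcting
# block code over a binary symmetric channel with bit-error rate p.
from math import comb

def block_success(n, t, p):
    """P(at most t bit errors in n bits): the block is decodable."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

def energy_per_info_bit(n, t, p, e_bit=1.0):
    k = n - 10 * t              # hypothetical parity cost per corrected error
    if k <= 0:
        return float('inf')
    return n * e_bit / (k * block_success(n, t, p))

n, p = 255, 0.01
best_t = min(range(20), key=lambda t: energy_per_info_bit(n, t, p))
# Neither no coding (t=0) nor maximal coding is optimal:
print(0 < best_t < 19)  # -> True
```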
Nasrabadi, M. N. Sepiani, M.
2015-03-30
Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are produced mainly through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied to simulate the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; parameters and different models of the nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimal production of the desired radioactive yields.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method, with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
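The mode-pruning idea can be sketched independently of the codec (the probabilities below are invented, and the belief-propagation step that would estimate them is omitted):

```python
# Test only the most probable intra prediction modes: keep the smallest
# set whose cumulative probability exceeds a confidence threshold.
def select_modes(mode_probs, threshold=0.9):
    """Return mode indices to evaluate, most probable first."""
    order = sorted(range(len(mode_probs)), key=lambda m: -mode_probs[m])
    chosen, cum = [], 0.0
    for m in order:
        chosen.append(m)
        cum += mode_probs[m]
        if cum >= threshold:
            break
    return chosen

# Hypothetical mode probabilities: only 4 of 7 modes need full R-D tests.
probs = [0.45, 0.25, 0.15, 0.08, 0.04, 0.02, 0.01]
print(select_modes(probs))  # -> [0, 1, 2, 3]
```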
Adaptive down-sampling video coding
NASA Astrophysics Data System (ADS)
Wang, Ren-Jie; Chien, Ming-Chen; Chang, Pao-Chi
2010-01-01
Down-sampling coding, which sub-samples the image and encodes the smaller-sized image, is one of the solutions for raising image quality when the available rate is insufficiently high. In this work, we propose an Adaptive Down-Sampling (ADS) coding for H.264/AVC. The overall system distortion can be analyzed as the sum of the down-sampling distortion and the coding distortion. The down-sampling distortion is mainly the loss of the high-frequency components and is highly dependent on the spatial difference. The coding distortion can be derived from classical rate-distortion theory. For a given rate and video sequence, the optimum down-sampling resolution ratio can be derived by minimizing the system distortion based on the models of the two distortions. This optimal resolution ratio is used in both the down-sampling and up-sampling processes of the ADS coding scheme. As a result, the rate-distortion performance of ADS coding is always higher than that of fixed-ratio coding or H.264/AVC by 2 to 4 dB at low to medium rates.
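The optimization described above can be mimicked with hypothetical distortion models (the paper derives its own; the constants and functional forms below are made up for illustration):

```python
# Total distortion = down-sampling distortion (grows as the ratio shrinks)
# + coding distortion (shrinks as the same bits cover fewer pixels).
# We scan ratios in (0, 1] for the minimum.
import math

def total_distortion(r, rate, k_down=1.0, k_code=8.0):
    """Hypothetical system distortion at down-sampling ratio r in (0, 1]."""
    d_down = k_down * (1.0 - r) ** 2          # lost high frequencies
    d_code = k_code * math.exp(-rate / r)     # toy R-D model
    return d_down + d_code

def best_ratio(rate, step=0.01):
    ratios = [i * step for i in range(1, int(1 / step) + 1)]
    return min(ratios, key=lambda r: total_distortion(r, rate))

# At low rates the optimum is a reduced resolution; at high rates it
# approaches full resolution (ratio 1.0).
print(best_ratio(0.5) < best_ratio(8.0))  # -> True
```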
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism devotes to improving coding performance under various application conditions. PMID:26999741
ERIC Educational Resources Information Center
Million, June
2004-01-01
In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…
Lichenase and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2000-08-15
The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-β-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach that simultaneously generates, from a high-level specification, both the code and all annotations required to certify it. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
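To make the role of annotations concrete, here is an illustrative sketch (not the AUTOBAYES/E-SETHEO pipeline) of a loop annotated with an invariant. The three assertions mirror the proof obligations a verification condition generator would emit: the invariant holds on entry, is preserved by each iteration, and together with the exit condition implies the postcondition.

```python
# Illustrative only: runtime assertions standing in for the formal proof
# obligations derived from a loop invariant (total == sum(xs[:i])).

def summed(xs):
    total, i = 0, 0
    assert total == sum(xs[:i])      # obligation 1: invariant holds on entry
    while i < len(xs):
        total += xs[i]
        i += 1
        assert total == sum(xs[:i])  # obligation 2: invariant preserved
    assert total == sum(xs)          # obligation 3: invariant + exit => postcondition
    return total

print(summed([3, 1, 4]))             # 8
```

In the certification setting these obligations are discharged once, statically, by a theorem prover rather than checked at runtime, which is why the annotations must be generated precisely alongside the code.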
Xie, Boyang; Tang, Kun; Cheng, Hua; Liu, Zhengyou; Chen, Shuqi; Tian, Jianguo
2017-02-01
Coding acoustic metasurfaces can combine simple logical bits to acquire sophisticated functions in wave control. The acoustic logical bits can achieve a phase difference of exactly π and a perfect match of the amplitudes for the transmitted waves. By programming the coding sequences, acoustic metasurfaces with various functions, including peculiar antenna patterns and wave focusing, have been demonstrated.
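The coding idea can be illustrated with a simple array-factor calculation; the unit spacing and sequence lengths below are invented for the sketch. Each logical bit maps a metasurface unit to a transmitted phase of 0 or π with equal amplitude, and the far-field pattern follows from summing the unit contributions, so reprogramming the bit sequence reshapes the radiated field.

```python
# Sketch (hypothetical parameters): a 1-bit coding sequence assigns each unit
# a phase of 0 or pi; the array factor shows how the sequence shapes the field.
import cmath
import math

def array_factor(bits, theta, spacing=0.5):      # spacing in wavelengths
    """|sum of unit responses| toward angle theta (radians from broadside)."""
    total = 0j
    for n, b in enumerate(bits):
        phase = math.pi * b                      # logical bit -> 0 or pi phase
        total += cmath.exp(1j * (phase + 2 * math.pi * spacing * n * math.sin(theta)))
    return abs(total)

uniform = [0] * 16                               # "000...0": single broadside beam
coded = [0, 1] * 8                               # "0101...": broadside cancelled
print(array_factor(uniform, 0.0))                # 16.0 (all units in phase)
print(array_factor(coded, 0.0))                  # ~0 (alternating pi phases cancel)
```

The alternating sequence cancels the broadside beam and redirects energy into symmetric off-axis lobes, which is the basic mechanism behind the programmable antenna patterns the abstract describes.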
Computerized mega code recording.
Burt, T W; Bock, H C
1988-04-01
A system has been developed to facilitate recording of advanced cardiac life support mega code testing scenarios. By scanning a paper "keyboard" using a bar code wand attached to a portable microcomputer, the person assigned to record the scenario can easily generate an accurate, complete, timed, and typewritten record of the given situations and the obtained responses.
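The recording workflow can be sketched as below; the event codes and class name are hypothetical, not from the described system. Each bar-code scan maps to a scripted event, and the recorder timestamps it relative to the start of the scenario to produce the timed, typewritten record.

```python
# Hypothetical sketch of timed mega code event logging: a scanned code is
# looked up and appended to the record with its elapsed time.
import time

EVENTS = {"VF": "Ventricular fibrillation presented",
          "DEFIB": "Defibrillation performed",
          "EPI": "Epinephrine administered"}

class MegaCodeRecorder:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.start = clock()
        self.log = []                            # (elapsed seconds, event text)

    def scan(self, code):
        elapsed = self.clock() - self.start
        self.log.append((round(elapsed, 1), EVENTS.get(code, f"Unknown code {code}")))

recorder = MegaCodeRecorder()
recorder.scan("VF")
recorder.scan("DEFIB")
for t, event in recorder.log:
    print(f"{t:5.1f}s  {event}")
```

Because the recorder only needs a wand swipe per event, the person documenting the scenario can keep pace with the resuscitation sequence instead of writing free-text notes.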
Pseudonoise code tracking loop
NASA Technical Reports Server (NTRS)
Laflame, D. T. (Inventor)
1980-01-01
A delay-locked loop is presented for tracking a pseudonoise (PN) reference code in an incoming communication signal. The loop is less sensitive to gain imbalances, which can otherwise introduce timing errors in the PN reference code formed by the loop.
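The core of a delay-locked loop can be sketched with an early-late discriminator; this is an illustrative model, not the patented circuit, and the 31-chip code and one-chip offsets are chosen for simplicity. The incoming PN code is correlated against early and late local replicas; their difference is zero at lock and its sign steers the local code timing.

```python
# Illustrative early-late delay-locked loop discriminator on a 31-chip
# maximal-length PN code (polynomial x^5 + x^2 + 1); offsets in whole chips.

def m_sequence(n=31):
    """31-chip m-sequence from the recurrence a[k] = a[k-3] ^ a[k-5], as +/-1 chips."""
    bits = [0, 0, 0, 0, 1]
    while len(bits) < n:
        bits.append(bits[-3] ^ bits[-5])
    return [1 if b else -1 for b in bits[:n]]

def rotate(seq, k):
    k %= len(seq)
    return seq[k:] + seq[:k]

def correlate(x, y):
    return sum(a * b for a, b in zip(x, y))

def discriminator(received, code, hypothesis):
    """Early-minus-late correlation; zero when the timing hypothesis is correct."""
    early = correlate(received, rotate(code, hypothesis - 1))
    late = correlate(received, rotate(code, hypothesis + 1))
    return early - late

code = m_sequence()
received = rotate(code, 3)               # incoming signal delayed by 3 chips
print(discriminator(received, code, 3))  # 0: locked, no timing error
print(discriminator(received, code, 4))  # positive: replica late, pull timing back
```

A loop built this way is balanced by construction: both branches share the same correlator structure, which hints at why gain imbalance between the early and late arms, the problem the patent addresses, translates directly into a timing offset.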